Volumes can't be created without cinder nodes

Bug #1604342 reported by Kyrylo Romanenko
Affects: Mirantis OpenStack
Status: Fix Committed
Importance: High
Assigned to: Maksim Malchuk

Bug Description

Steps:
1. Create cluster with Neutron
2. Add 3 nodes with controller role
3. Add 3 nodes with compute and ceph-osd role
4. Deploy the cluster
5. Check ceph status
6. Run OSTF tests

Expected result: all OSTF tests pass.
Actual result: 2 OSTF tests failed:
  - Create volume and boot instance from it (failure) Failed to get to expected status. In error state. Please refer to OpenStack logs for more details.
  - Create volume and attach it to instance (failure) Time limit exceeded while waiting for volume becoming 'in-use' to finish. Please refer to OpenStack logs for more details.
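Both failures are timeouts/errors in a status poll: the test creates a volume and waits for it to reach the expected status. A minimal sketch of that wait loop (not the actual OSTF code; `get_status` is a stand-in for a real Cinder API call):

```python
import time

def wait_for_volume_status(get_status, expected="in-use", timeout=180, interval=5):
    """Poll a volume until it reaches the expected status.

    get_status is a callable returning the current status string
    ('creating', 'available', 'in-use', 'error', ...). Raises on
    'error' or on timeout, mirroring the two failures seen above.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == expected:
            return True
        if status == "error":
            raise RuntimeError("volume went to error state")
        time.sleep(interval)
    raise TimeoutError("time limit exceeded while waiting for %r" % expected)
```

With no cinder-volume service picking up the request, the volume never leaves 'creating', so the loop above times out exactly as reported.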

Failed CI jobs:
https://product-ci.infra.mirantis.net/view/10.0-mitaka/job/10.0-mitaka.main.ubuntu.bvt_2/108/
https://product-ci.infra.mirantis.net/view/10.0-mitaka/job/10.0-mitaka.main.ubuntu.bvt_2/109/

Fuel snapshot attached.

Revision history for this message
Kyrylo Romanenko (kromanenko) wrote :
Revision history for this message
Alexei Sheplyakov (asheplyakov) wrote :

The logs (in particular, node-1/commands/ceph_s.txt) indicate that the Ceph cluster is OK:

[10.109.0.4] out: cluster b64e046f-653f-4e95-848e-1794b8298e98
[10.109.0.4] out: health HEALTH_WARN
[10.109.0.4] out: too many PGs per OSD (352 > max 300)
[10.109.0.4] out: monmap e3: 3 mons at {node-1=10.109.2.3:6789/0,node-4=10.109.2.2:6789/0,node-5=10.109.2.5:6789/0}
[10.109.0.4] out: election epoch 8, quorum 0,1,2 node-4,node-1,node-5
[10.109.0.4] out: osdmap e33: 6 osds: 6 up, 6 in
[10.109.0.4] out: pgmap v101: 704 pgs, 10 pools, 22052 kB data, 52 objects
[10.109.0.4] out: 12727 MB used, 283 GB / 296 GB avail
[10.109.0.4] out: 704 active+clean
[10.109.0.4] out:
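The HEALTH_WARN itself is harmless and easy to verify. Assuming the default pool replication size of 3 (an assumption; the snapshot does not show the pool sizes), the 352 figure follows directly from the pgmap line:

```python
# Why ceph reports "too many PGs per OSD (352 > max 300)":
# each PG is replicated, so every OSD carries total_pgs * size / osds replicas.
total_pgs = 704        # from "pgmap v101: 704 pgs"
replication_size = 3   # assumption: the Ceph default pool size
osds = 6               # from "osdmap e33: 6 osds"

pgs_per_osd = total_pgs * replication_size // osds
print(pgs_per_osd)  # 352, matching the warning; the default warn threshold is 300
```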

Also, there's nothing unusual in the OSDs' logs (node-2/var/log/ceph/ceph-osd.{1,4}.log, node-3/var/log/ceph/ceph-osd.{0,3}.log, etc.), and the same goes for the monitors.

Last but not least, just because the log file is called "fail_error_ceph_radosgw-blah-blah.tar.gz" does NOT mean the problem has anything to do with Ceph.

Changed in mos:
assignee: MOS Ceph (mos-ceph) → nobody
Revision history for this message
Alexei Sheplyakov (asheplyakov) wrote :

> 1. Create cluster with Neutron
> 2. Add 3 nodes with controller role
> 3. Add 3 nodes with compute and ceph-osd role
> 4. Deploy the cluster

Please add at least one cinder node to get volumes working.

Changed in mos:
status: New → Invalid
summary: - Failed to create volumes in CephRadosGW cluster configuration
+ Volumes can't be created without cinder nodes
Revision history for this message
Kyrylo Romanenko (kromanenko) wrote :

Ceph-OSD should provide volumes, shouldn't it?

Settings applied in test are:
settings = {
    'volumes_lvm': False,
    'volumes_ceph': True,
    'images_ceph': True,
    'objects_ceph': True,
    'tenant': 'rados',
    'user': 'rados',
    'password': 'rados'
}
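With 'volumes_ceph' enabled, Fuel configures cinder-volume on the controllers to use the Cinder RBD driver; an illustrative cinder.conf fragment (values are typical defaults, not taken from this deployment):

```ini
[DEFAULT]
enabled_backends = RBD-backend

[RBD-backend]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = volumes
volume_backend_name = RBD-backend
```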

tags: added: bvt-fail
Revision history for this message
Alexei Sheplyakov (asheplyakov) wrote :

> Ceph-OSD should provide volumes

A Ceph cluster can provide *rbd* volumes; however, OpenStack can use them only indirectly, via cinder.

Changed in mos:
status: Invalid → Confirmed
Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

Alexei, you don't need a cinder node in the case of Ceph. All cinder services should be running on the controller(s).

Revision history for this message
Maksim Malchuk (mmalchuk) wrote :
Changed in mos:
assignee: nobody → Maksim Malchuk (mmalchuk)
status: Confirmed → Fix Committed
Revision history for this message
Maksim Malchuk (mmalchuk) wrote :
Revision history for this message
Alexey Deryugin (velovec) wrote :

Finally, the root cause has been found: after Ceph pool creation, cinder-volume is restarted on only one random node. cinder-volume just needs to be restarted on all nodes, and then everything works fine.
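The workaround amounts to restarting cinder-volume on every controller so each node picks up the newly created Ceph pools. A hypothetical sketch of the commands one would run from the Fuel master (node names and the Ubuntu service name are assumptions, not taken from the fix itself):

```python
# Build the per-node restart commands; printed here rather than executed,
# so the sketch stays side-effect free.
controllers = ["node-1", "node-4", "node-5"]  # hypothetical controller names

def restart_commands(nodes):
    # On Ubuntu-based MOS nodes the service is "cinder-volume";
    # ssh lets the commands be issued from the Fuel master.
    return [["ssh", node, "service", "cinder-volume", "restart"] for node in nodes]

for cmd in restart_commands(controllers):
    print(" ".join(cmd))
```

The committed fix in fuel-library presumably does the equivalent via the deployment tasks rather than manual ssh.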

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/fuel-library 10.0.0rc1

This issue was fixed in the openstack/fuel-library 10.0.0rc1 release candidate.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/fuel-library 10.0.0

This issue was fixed in the openstack/fuel-library 10.0.0 release.
