Comment 7 for bug 1242942

Revision history for this message
Phil Frost (bitglue) wrote : Re: cinder does not handle missing volume group gracefully

The proper Havana release is in the UCA now, so I've upgraded to that. I'm still able to reproduce this. It's simple for me: I'm deploying with the puppet-openstack modules: https://forge.puppetlabs.com/puppetlabs/openstack. I can't think of anything unusual about the setup -- simply don't create a cinder-volumes volume group, then try to create a volume.
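For reference, creating the volume group by hand does make the driver initialize, so the missing VG is clearly the trigger. A sketch using a loopback device (the file path, size, and /dev/loop0 below are placeholders, not from my deployment; a real install would use a dedicated block device) -- though of course the point of this report is that cinder should handle the VG's absence gracefully, not that the VG can be created:

```shell
# Back the volume group with a sparse file (illustrative only;
# paths, size, and loop device are placeholders).
truncate -s 10G /var/lib/cinder/cinder-volumes.img
losetup /dev/loop0 /var/lib/cinder/cinder-volumes.img

# Initialize it for LVM and create the VG cinder expects.
pvcreate /dev/loop0
vgcreate cinder-volumes /dev/loop0

# The driver's startup check should now pass.
vgs cinder-volumes
```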

cinder-volume will log, at startup:

2013-10-31 11:45:32.311 15709 TRACE cinder.volume.manager Traceback (most recent call last):
2013-10-31 11:45:32.311 15709 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 190, in init_host
2013-10-31 11:45:32.311 15709 TRACE cinder.volume.manager self.driver.check_for_setup_error()
2013-10-31 11:45:32.311 15709 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py", line 94, in check_for_setup_error
2013-10-31 11:45:32.311 15709 TRACE cinder.volume.manager raise exception.VolumeBackendAPIException(data=message)
2013-10-31 11:45:32.311 15709 TRACE cinder.volume.manager VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Volume Group cinder-volumes does not exist

Then, when creation of a volume is requested:

2013-10-31 11:46:11.638 15709 ERROR cinder.openstack.common.rpc.amqp [req-70eebc51-fca6-410d-8caa-9415a4a21530 b7b8f92e13534c2bbd32b0ff1b801b76 a2e59bca1d7a48eb895f4f7806bb89d6] Exception during message handling
2013-10-31 11:46:11.638 15709 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2013-10-31 11:46:11.638 15709 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2013-10-31 11:46:11.638 15709 TRACE cinder.openstack.common.rpc.amqp **args)
2013-10-31 11:46:11.638 15709 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2013-10-31 11:46:11.638 15709 TRACE cinder.openstack.common.rpc.amqp return getattr(proxyobj, method)(ctxt, **kwargs)
2013-10-31 11:46:11.638 15709 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/cinder/utils.py", line 807, in wrapper
2013-10-31 11:46:11.638 15709 TRACE cinder.openstack.common.rpc.amqp raise exception.DriverNotInitialized(driver=driver_name)
2013-10-31 11:46:11.638 15709 TRACE cinder.openstack.common.rpc.amqp DriverNotInitialized: Volume driver 'LVMISCSIDriver' not initialized.

At this point the volume exists, but it's stuck in "creating" forever:

pfrost@os-controller01:~$ cinder list
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
|                  ID                  |  Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
| c950bca7-5d31-465f-a0c7-9503f845b265 | creating |     test     |  1   |    None     |  false   |             |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
pfrost@os-controller01:~$ cinder show c950bca7-5d31-465f-a0c7-9503f845b265
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|           created_at           |      2013-10-31T15:46:11.000000      |
|      display_description       |                                      |
|          display_name          |                 test                 |
|               id               | c950bca7-5d31-465f-a0c7-9503f845b265 |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |             os-compute01             |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   a2e59bca1d7a48eb895f4f7806bb89d6   |
|              size              |                  1                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+
pfrost@os-controller01:~$ cinder delete c950bca7-5d31-465f-a0c7-9503f845b265
ERROR: Invalid volume: Volume status must be available or error, but current status is: creating

It can be force-deleted, but then it's just stuck in "deleting" instead of "creating":

pfrost@os-controller01:~$ cinder force-delete c950bca7-5d31-465f-a0c7-9503f845b265
pfrost@os-controller01:~$ cinder list
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
|                  ID                  |  Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
| c950bca7-5d31-465f-a0c7-9503f845b265 | deleting |     test     |  1   |    None     |  false   |             |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
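For anyone else hitting this: the only way I've found to clear the wedged row is to reset its status out-of-band, directly in the database named by sql_connection. A sketch (the credentials and host below are placeholders to match cinder.conf; edit the DB at your own risk):

```shell
# Flip the stuck volume to 'error' so the API will accept a delete.
# User/host are placeholders; take them from sql_connection in
# cinder.conf.
mysql -u cinder -p -h 127.0.0.1 cinder <<'SQL'
UPDATE volumes
   SET status = 'error'
 WHERE id = 'c950bca7-5d31-465f-a0c7-9503f845b265';
SQL
```

After that, `cinder delete` no longer refuses the request, though the delete itself still needs an initialized driver to actually complete.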

I'm not sure why you wouldn't be able to reproduce this, unless it's something that's been fixed very recently. My cinder.conf looks like:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = False
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rabbit_host=127.0.0.1
sql_connection=mysql://cinder:xxx@127.0.0.1/cinder?charset=utf8
api_paste_config=/etc/cinder/api-paste.ini
debug=False
rabbit_userid=openstack
osapi_volume_listen=0.0.0.0
sql_idle_timeout=3600
rabbit_virtual_host=/
scheduler_driver=cinder.scheduler.simple.SimpleScheduler
rabbit_hosts=127.0.0.1:5672
rabbit_ha_queues=False
rabbit_password=xxx
rabbit_port=5672
rpc_backend=cinder.openstack.common.rpc.impl_kombu