Activity log for bug #1349888

Date Who What changed Old value New value Message
2014-07-29 14:58:58 git-harry bug added bug
2014-07-29 14:59:14 git-harry nova: assignee git-harry (git-harry)
2014-07-29 15:06:16 OpenStack Infra nova: status New In Progress
2014-07-30 10:23:49 Nikola Đipanov nova: importance Undecided High
2014-07-30 10:23:54 Nikola Đipanov nova: milestone juno-3
2014-07-31 18:38:47 Vish Ishaya tags icehouse-backport-potential
2014-09-01 06:33:46 Takashi Natsume bug added subscriber Takashi NATSUME
2014-09-01 07:56:57 OpenStack Infra nova: assignee git-harry (git-harry) Nikola Đipanov (ndipanov)
2014-09-02 16:03:43 OpenStack Infra nova: assignee Nikola Đipanov (ndipanov) git-harry (git-harry)
2014-09-04 11:52:44 Thierry Carrez nova: milestone juno-3 juno-rc1
2014-09-29 10:12:31 OpenStack Infra nova: assignee git-harry (git-harry) Nikola Đipanov (ndipanov)
2014-09-29 10:18:47 Nikola Đipanov nova: assignee Nikola Đipanov (ndipanov) git-harry (git-harry)
2014-09-30 10:07:38 OpenStack Infra nova: status In Progress Fix Committed
2014-10-01 07:36:44 Thierry Carrez nova: status Fix Committed Fix Released
2014-10-16 08:54:33 Thierry Carrez nova: milestone juno-rc1 2014.2
2015-09-08 07:51:52 Louis Bouchard bug task added nova (Ubuntu)
2015-09-08 07:52:08 Louis Bouchard nominated for series Ubuntu Trusty
2015-09-08 07:52:08 Louis Bouchard bug task added nova (Ubuntu Trusty)
2015-09-08 14:48:13 Edward Hope-Morley nova (Ubuntu): status New In Progress
2015-09-08 14:48:17 Edward Hope-Morley nova (Ubuntu): assignee Edward Hope-Morley (hopem)
2015-09-08 14:48:19 Edward Hope-Morley nova (Ubuntu): importance Undecided High
2015-09-08 14:48:22 Edward Hope-Morley nova (Ubuntu Trusty): assignee Edward Hope-Morley (hopem)
2015-09-08 14:48:24 Edward Hope-Morley nova (Ubuntu Trusty): importance Undecided High
2015-09-08 14:48:28 Edward Hope-Morley nova (Ubuntu Trusty): status New In Progress
2015-09-08 14:48:31 Edward Hope-Morley branch linked lp:~hopem/nova/icehouse-lp1349888
2015-09-08 14:49:48 Edward Hope-Morley summary Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted. [SRU] Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted.
2015-09-08 15:15:02 Edward Hope-Morley description Prepended the SRU template ([Impact], [Test Case], [Regression Potential]) to the original report; the report itself is unchanged. New description:

[Impact]

 * Ensure attaching an already-attached volume to a second instance does not
   interfere with the attached instance's volume record.

[Test Case]

 * Create cinder volume vol1 and two instances vm1 and vm2.
 * Attach vol1 to vm1 and check that the attach was successful by doing:
   - cinder list
   - nova show <vm1>
   e.g. http://paste.ubuntu.com/12314443/
 * Attach vol1 to vm2 and check that the attach fails and, crucially, that the
   first attach is unaffected (as above). You also check the Nova db as
   follows:

   select * from block_device_mapping where source_type='volume' and \
       (instance_uuid='<vm1>' or instance_uuid='<vm2>');

   from which you would expect e.g. http://paste.ubuntu.com/12314416/, which
   shows that vol1 is attached to vm1 and the vm2 attach failed.
 * Finally, detach vol1 from vm1 and ensure that it succeeds.

[Regression Potential]

 * none

---- ---- ---- ----

Nova assumes there is only ever one bdm per volume. When an attach is
initiated a new bdm is created; if the attach fails, a bdm for the volume is
deleted, but it is not necessarily the one that was just created. The
following steps show how a volume can get stuck detaching because of this.

$ nova list
+--------------------------------------+--------+--------+------------+-------------+------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks         |
+--------------------------------------+--------+--------+------------+-------------+------------------+
| cb5188f8-3fe1-4461-8a9d-3902f7cc8296 | test13 | ACTIVE | -          | Running     | private=10.0.0.2 |
+--------------------------------------+--------+--------+------------+-------------+------------------+

$ cinder list
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
| ID                                   | Status    | Name   | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
| c1e38e93-d566-4c99-bfc3-42e77a428cc4 | available | test10 | 1    | lvm1        | false    |             |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+

$ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
| serverId | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
| volumeId | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
+----------+--------------------------------------+

$ cinder list
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name   | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
| c1e38e93-d566-4c99-bfc3-42e77a428cc4 | in-use | test10 | 1    | lvm1        | false    | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+

$ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-1fa34b54-25b5-4296-9134-b63321b0015d)

$ nova volume-detach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4

$ cinder list
+--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name   | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
| c1e38e93-d566-4c99-bfc3-42e77a428cc4 | detaching | test10 | 1    | lvm1        | false    | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
+--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+

2014-07-29 14:47:13.952 ERROR oslo.messaging.rpc.dispatcher [req-134dfd17-14da-4de0-93fc-5d8d7bbf65a5 admin admin] Exception during message handling: <type 'NoneType'> can't be decoded
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 406, in decorated_function
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 88, in wrapped
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     payload)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 71, in wrapped
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 291, in decorated_function
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     pass
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 277, in decorated_function
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 319, in decorated_function
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 307, in decorated_function
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 4363, in detach_volume
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     self._detach_volume(context, instance, bdm)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 4309, in _detach_volume
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     connection_info = jsonutils.loads(bdm.connection_info)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/jsonutils.py", line 176, in loads
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return json.loads(strutils.safe_decode(s, encoding), **kwargs)
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/strutils.py", line 134, in safe_decode
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     raise TypeError("%s can't be decoded" % type(text))
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher TypeError: <type 'NoneType'> can't be decoded
2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher
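A minimal, self-contained sketch of the failure shown in the traceback above (this is not the Nova or oslo implementation; the helpers safe_decode and loads merely mirror the modules named in the traceback, under the assumption that the surviving BDM record ends up with connection_info set to None after the failed second attach):

    import json

    def safe_decode(text, encoding='utf-8'):
        # Stand-in for the guard in nova/openstack/common/strutils.safe_decode:
        # anything that is not a string or bytes raises TypeError.
        if not isinstance(text, (bytes, str)):
            raise TypeError("%s can't be decoded" % type(text))
        return text.decode(encoding) if isinstance(text, bytes) else text

    def loads(s):
        # Stand-in for jsonutils.loads(bdm.connection_info) as called from
        # _detach_volume in the traceback above.
        return json.loads(safe_decode(s))

    # If the failed attach deleted the wrong BDM row, the row used by the later
    # detach has no connection_info, so the detach tries to JSON-decode None:
    connection_info = None
    try:
        loads(connection_info)
    except TypeError as exc:
        print(exc)  # "<class 'NoneType'> can't be decoded" on Python 3
                    # (the Python 2 log above shows "<type 'NoneType'>")

This is why the volume is left stuck in "detaching": the detach RPC dies before any cleanup of the Cinder-side state can happen.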
2015-09-08 15:22:15 Edward Hope-Morley nova (Ubuntu): status In Progress Fix Released
2015-09-08 15:23:22 Edward Hope-Morley description Re-saved the description with minor wording and indentation tweaks to the SRU sections ("You also check the Nova db" became "You can also check the Nova db"); the content is otherwise identical to the 15:15:02 revision above.
2015-09-16 15:37:15 Chris J Arges nova (Ubuntu Trusty): status In Progress Fix Committed
2015-09-16 15:37:18 Chris J Arges bug added subscriber Ubuntu Stable Release Updates Team
2015-09-16 15:37:20 Chris J Arges bug added subscriber SRU Verification
2015-09-16 15:37:29 Chris J Arges tags icehouse-backport-potential icehouse-backport-potential verification-needed
2015-09-17 11:23:02 Edward Hope-Morley tags icehouse-backport-potential verification-needed icehouse-backport-potential verification-done
2015-09-23 18:59:02 Chris J Arges removed subscriber Ubuntu Stable Release Updates Team
2015-09-23 19:09:05 Launchpad Janitor nova (Ubuntu Trusty): status Fix Committed Fix Released
2016-01-14 20:47:33 Matt Riedemann tags icehouse-backport-potential verification-done verification-done