unhandled return from linuxscsi.find_multipath_device in disconnect_volume

Bug #1424968 reported by Stephen Marron
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: Low
Assigned to: Davanum Srinivas (DIMS)

Bug Description

disconnect_volume in class LibvirtFibreChannelVolumeDriver calls linuxscsi.find_multipath_device but does not handle the potential return of None (e.g., if the multipath device does not exist).

https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/volume.py?id=1469c8e14267e27ecc6ced29c91dc1506ce26633#n992

https://git.openstack.org/cgit/openstack/nova/tree/nova/storage/linuxscsi.py?id=1469c8e14267e27ecc6ced29c91dc1506ce26633#n91

Adding the following to disconnect_volume resolved the issue for me:

         if 'multipath_id' in connection_info['data']:
             multipath_id = connection_info['data']['multipath_id']
             mdev_info = linuxscsi.find_multipath_device(multipath_id)
+            if mdev_info is None:
+                return
             devices = mdev_info['devices']
             LOG.debug("devices to remove = %s", devices)
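The failure mode can be reproduced outside nova with a small sketch. The function below is a hypothetical stand-in for linuxscsi.find_multipath_device (not nova code): it returns a dict of device info when the multipath device is known, or None otherwise, and the caller guards the return value before subscripting it, matching the patch above.

```python
def find_multipath_device(device_id, known=None):
    """Hypothetical stand-in: return a device-info dict, or None if absent."""
    known = known or {}
    return known.get(device_id)


def disconnect_volume(device_id, known=None):
    mdev_info = find_multipath_device(device_id, known)
    # Without this guard, mdev_info['devices'] on a missing device raises
    # TypeError: 'NoneType' object has no attribute '__getitem__' (Python 2)
    # / 'NoneType' object is not subscriptable (Python 3).
    if mdev_info is None:
        return []
    return mdev_info['devices']


print(disconnect_volume("dm-0"))  # []
print(disconnect_volume("dm-1", {"dm-1": {"devices": ["/dev/sdb"]}}))  # ['/dev/sdb']
```

The unguarded version of disconnect_volume is exactly what the traceback below shows: the None return flows straight into the `devices = mdev_info['devices']` lookup.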

Nova Logs from Juno

2015-02-24 17:48:01.140 32199 AUDIT nova.service [-] Starting compute node (version 2014.2.1-1.el7.centos)
2015-02-24 17:48:01.264 32199 INFO nova.compute.manager [-] [instance: 210f6191-a646-4bba-96ef-80df63d2df01] Deleting instance as its host (Compute-02.domain) is not equal to our host (Compute-01.domain).
2015-02-24 17:48:01.826 32199 INFO nova.virt.libvirt.driver [-] [instance: 210f6191-a646-4bba-96ef-80df63d2df01] Instance destroyed successfully.
2015-02-24 17:48:01.928 32199 ERROR nova.openstack.common.threadgroup [-] 'NoneType' object has no attribute '__getitem__'
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup x.wait()
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup return self.thread.wait()
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup return self._exit_event.wait()
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup return hubs.get_hub().switch()
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup return self.greenlet.switch()
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup result = function(*args, **kwargs)
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup service.start()
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup self.manager.init_host()
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1137, in init_host
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup self._destroy_evacuated_instances(context)
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 768, in _destroy_evacuated_instances
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup bdi, destroy_disks)
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1056, in destroy
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup destroy_disks, migrate_data)
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1164, in cleanup
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup instance=instance)
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup six.reraise(self.type_, self.value, self.tb)
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1153, in cleanup
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup self._disconnect_volume(connection_info, disk_dev)
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1334, in _disconnect_volume
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup return driver.disconnect_volume(connection_info, disk_dev)
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 272, in inner
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup return f(*args, **kwargs)
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume.py", line 1043, in disconnect_volume
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup devices = mdev_info['devices']
2015-02-24 17:48:01.928 32199 TRACE nova.openstack.common.threadgroup TypeError: 'NoneType' object has no attribute '__getitem__'

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/159626

Changed in nova:
assignee: nobody → Davanum Srinivas (DIMS) (dims-v)
status: New → In Progress
Changed in nova:
importance: Undecided → Low
Changed in nova:
assignee: Davanum Srinivas (DIMS) (dims-v) → melanie witt (melwitt)
Changed in nova:
assignee: melanie witt (melwitt) → Davanum Srinivas (DIMS) (dims-v)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/159626
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=144dd4f02b3ee9b79e64de830c31984f86ec3001
Submitter: Jenkins
Branch: master

commit 144dd4f02b3ee9b79e64de830c31984f86ec3001
Author: Davanum Srinivas <email address hidden>
Date: Fri Feb 27 08:21:50 2015 -0500

    Fix disconnect_volume issue when find_multipath_device returns None

    During disconnect_volume in LibvirtFibreChannelVolumeDriver there is
    a chance that find_multipath_device behaves badly and returns None.
    When this happens we end up with a failure as we don't check the
    return value from find_multipath_device.

    Co-Authored-By: Melanie Witt <email address hidden>
    Closes-Bug: #1424968
    Change-Id: I5a5c7e26b70df237c7446efe8f99ce2304c41ab4

Changed in nova:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in nova:
milestone: none → liberty-1
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in nova:
milestone: liberty-1 → 12.0.0