TEMPEST test case failed "tempest.api.compute.volumes.test_attach_volume.AttachVolumeShelveTestJSON.test_detach_volume_shelved_or_offload_server"

Bug #1738254 reported by Vivek Soni
This bug affects 1 person
Affects: Cinder
Status: Fix Released
Importance: High
Assigned to: Matt Riedemann
Milestone: (none)

Bug Description

Analysis: terminate_connection is called with an empty connector dictionary and fails with the error below.

Log:
---------------------------------------
Dec 13 01:07:18.228501 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager [None req-68489563-a6c8-4507-82b0-df098b6af495 tempest-AttachVolumeShelveTestJSON-1044675265 None] Terminate volume connection failed: 'host': KeyError: 'host'
Dec 13 01:07:18.228869 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager Traceback (most recent call last):
Dec 13 01:07:18.229392 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager File "/opt/stack/new/cinder/cinder/volume/manager.py", line 4432, in _connection_terminate
Dec 13 01:07:18.229723 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager force=force)
Dec 13 01:07:18.230094 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager File "/opt/stack/new/cinder/cinder/utils.py", line 883, in trace_logging_wrapper
Dec 13 01:07:18.230393 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager result = f(*args, **kwargs)
Dec 13 01:07:18.230581 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager File "/opt/stack/new/cinder/cinder/zonemanager/utils.py", line 104, in decorator
Dec 13 01:07:18.230761 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager conn_info = terminate_connection(self, *args, **kwargs)
Dec 13 01:07:18.230939 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager File "/opt/stack/new/cinder/cinder/volume/drivers/hpe/hpe_3par_fc.py", line 212, in terminate_connection
Dec 13 01:07:18.231202 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager hostname = common._safe_hostname(connector['host'])
Dec 13 01:07:18.231501 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager KeyError: 'host'
Dec 13 01:07:18.231813 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR cinder.volume.manager
Dec 13 01:07:18.332173 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server [None req-68489563-a6c8-4507-82b0-df098b6af495 tempest-AttachVolumeShelveTestJSON-1044675265 None] Exception during message handling: VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Terminate volume connection failed: 'host'
Dec 13 01:07:18.332686 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Dec 13 01:07:18.333009 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
Dec 13 01:07:18.333325 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
Dec 13 01:07:18.333642 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
Dec 13 01:07:18.334018 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
Dec 13 01:07:18.334212 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
Dec 13 01:07:18.334398 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
Dec 13 01:07:18.334618 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server File "/opt/stack/new/cinder/cinder/volume/manager.py", line 4468, in attachment_delete
Dec 13 01:07:18.334832 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server self._do_attachment_delete(context, vref, attachment_ref)
Dec 13 01:07:18.335020 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server File "/opt/stack/new/cinder/cinder/volume/manager.py", line 4475, in _do_attachment_delete
Dec 13 01:07:18.335216 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server attachment)
Dec 13 01:07:18.335396 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server File "/opt/stack/new/cinder/cinder/volume/manager.py", line 4440, in _connection_terminate
Dec 13 01:07:18.335582 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server raise exception.VolumeBackendAPIException(data=err_msg)
Dec 13 01:07:18.335810 d-p-c-fc-p227fc-879394 cinder-volume[5580]: ERROR oslo_messaging.rpc.server VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Terminate volume connection failed: 'host'
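
For context, the KeyError above comes from the driver indexing connector['host'] without guarding against a missing connector. A minimal, self-contained sketch of that failure mode follows; it is illustrative only, and _safe_hostname here is a stand-in, not the real HPE 3PAR helper:

def _safe_hostname(hostname):
    # stand-in for the driver's hostname sanitizer
    return hostname.replace('.', '-')

def terminate_connection(volume, connector, **kwargs):
    # A shelved offloaded instance has no compute host, so the detach path
    # arrives here with an empty connector dict and the lookup fails.
    return _safe_hostname(connector['host'])

terminate_connection({'id': 'fake-vol'}, {})  # raises KeyError: 'host'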

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/528028

Matt Riedemann (mriedem)
Changed in cinder:
status: New → In Progress
importance: Undecided → High
assignee: nobody → Matt Riedemann (mriedem)
Revision history for this message
John Griffith (john-griffith) wrote :

Yeah, this worked when we enforced the flow and a shelved volume was changed back to 'reserved', because then the API would catch this case here:
https://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L2093

But that was changed late in the Nova attach patch to match the old status settings for shelve, so now a non-attached volume is still marked as 'in-use'. The other option would be to use attach_status instead of volume status for this check, but then again we lie there as well by calling attachment-complete on a non-attached volume. A sketch of the trade-off follows below.
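
For illustration only, the two checks being compared might look like the sketch below. The status values ('reserved', 'in-use') and the attach_status field are standard Cinder volume fields, but this is not the code at the linked api.py line:

from collections import namedtuple

Volume = namedtuple('Volume', ['status', 'attach_status'])

def connection_needs_terminate(volume):
    # Check on volume status: works while a shelved volume is flipped back
    # to 'reserved', but breaks once a non-attached volume stays 'in-use'.
    if volume.status != 'in-use':
        return False
    # Check on attach_status instead: also unreliable here, because
    # attachment-complete is called on a volume that was never attached,
    # so attach_status reports 'attached' anyway.
    return volume.attach_status == 'attached'

# Volume attached to a shelved offloaded instance: never really attached,
# but both fields claim otherwise, so neither check catches the case.
shelved = Volume(status='in-use', attach_status='attached')
print(connection_needs_terminate(shelved))  # True, despite no connector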

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on cinder (master)

Change abandoned by Matt Riedemann (<email address hidden>) on branch: master
Review: https://review.openstack.org/528028
Reason: smcginnis said the drivers that are failing on this need to fix it in the driver.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/528028
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=1bcab6a48de991dd0e029c6acd5527643aa457c9
Submitter: Zuul
Branch: master

commit 1bcab6a48de991dd0e029c6acd5527643aa457c9
Author: Matt Riedemann <email address hidden>
Date: Thu Dec 14 13:20:32 2017 -0500

    Don't call driver.terminate_connection if there is no connector

    It's possible to attach a volume to a shelved offloaded instance,
    which is not on a compute host - which means no host connector,
    and then detach that volume while the instance is unshelved. There
    is a test in Tempest just for this scenario.

    When nova calls attachment_delete on detach, the cinder volume
    manager is calling the driver.terminate_connection method regardless
    of there being a host connector, and some drivers blow up on that.

    This used to "work" with the pre-3.27 attach flow in nova because
    in the detach case where there was no connector, nova would simply
    not call os-terminate_connection, but with the new 3.27+ flow for
    attach, nova only calls attachment_delete regardless of there
    being a connector.

    This change simply checks to see if the attachment has a connector
    and if not, it avoids the call to the driver to terminate a connection
    which does not exist.

    Change-Id: I496e45608798a6a5d9606f9594feeb8b60855d1a
    Closes-Bug: #1738254
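
In outline, the merged guard works along these lines. This is a simplified, standalone sketch based on the commit message above, not the exact code in cinder/volume/manager.py, and the attachment.connector attribute is assumed here for illustration:

def connection_terminate(driver, volume, attachment, force=False):
    connector = getattr(attachment, 'connector', None)
    if not connector:
        # The volume was attached to a shelved offloaded instance: no host
        # connector ever existed, so there is no backend connection to
        # terminate and the driver is not called.
        return None
    return driver.terminate_connection(volume, connector, force=force)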

Changed in cinder:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 12.0.0.0b3

This issue was fixed in the openstack/cinder 12.0.0.0b3 development milestone.
