NetApp E-Series fails to attach LUN if LUN is already mapped

Bug #1310659 reported by Andrew Kerr
Affects            Status        Importance  Assigned to  Milestone
Cinder             Fix Released  Undecided   Navneet
Cinder (Icehouse)  Fix Released  Undecided   Unassigned

Bug Description

In some situations a LUN on an E-Series controller may already be mapped without Cinder knowing about it. If Cinder then attempts certain operations (such as upload-to-image or create volume from image), it will try to attach the LUN to the Cinder node, and the attach will fail because the E-Series controller returns an error indicating that the LUN is already mapped; a sketch of how the driver could tolerate this follows the log excerpt below.

2014-04-18 12:30:04.424 DEBUG cinder.volume.drivers.netapp.eseries.client [req-fca76af9-20d8-407a-9641-416409264944 020beefaca554b97b895fb11fd176177 d1155ac0ccaf4f879b8817151d6a3ee8] Invoking rest with method: POST, path: /storage-systems/{system-id}/volume-mappings, data: {'mappableObjectId': u'0200000060080E500023BB340000C7C65350FAAF', 'targetId': u'8400000060080E500023C734003024D55350F8AA', 'lun': 1}, use_system: True, timeout: None, verify: False, kwargs: {}. from (pid=30195) _invoke /opt/stack/cinder/cinder/volume/drivers/netapp/eseries/client.py:123
2014-04-18 12:30:04.426 DEBUG urllib3.connectionpool [req-fca76af9-20d8-407a-9641-416409264944 020beefaca554b97b895fb11fd176177 d1155ac0ccaf4f879b8817151d6a3ee8] Setting read timeout to None from (pid=30195) _make_request /usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:375
2014-04-18 12:30:04.809 DEBUG urllib3.connectionpool [req-fca76af9-20d8-407a-9641-416409264944 020beefaca554b97b895fb11fd176177 d1155ac0ccaf4f879b8817151d6a3ee8] "POST /devmgr/v2/storage-systems/f79b215b-b502-43b7-800c-9b6a08c7086b/volume-mappings HTTP/1.1" 422 None from (pid=30195) _make_request /usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:415
2014-04-18 12:30:04.812 ERROR cinder.volume.driver [req-fca76af9-20d8-407a-9641-416409264944 020beefaca554b97b895fb11fd176177 d1155ac0ccaf4f879b8817151d6a3ee8] Unable to fetch connection information from backend: Response error - {
  "errorMessage" : "The operation cannot complete because the volume you are trying to map is already accessible by a host group or host in this partition.",
  "localizedMessage" : "The operation cannot complete because the volume you are trying to map is already accessible by a host group or host in this partition.",
  "retcode" : "105",
  "codeType" : "symbol"
}.
2014-04-18 12:30:04.812 DEBUG cinder.volume.driver [req-fca76af9-20d8-407a-9641-416409264944 020beefaca554b97b895fb11fd176177 d1155ac0ccaf4f879b8817151d6a3ee8] Cleaning up failed connect initialization. from (pid=30195) _attach_volume /opt/stack/cinder/cinder/volume/driver.py:399
2014-04-18 12:30:04.862 ERROR oslo.messaging.rpc.dispatcher [req-fca76af9-20d8-407a-9641-416409264944 020beefaca554b97b895fb11fd176177 d1155ac0ccaf4f879b8817151d6a3ee8] Exception during message handling: Bad or unexpected response from the storage volume backend API: Unable to fetch connection information from backend: Response error - {
  "errorMessage" : "The operation cannot complete because the volume you are trying to map is already accessible by a host group or host in this partition.",
  "localizedMessage" : "The operation cannot complete because the volume you are trying to map is already accessible by a host group or host in this partition.",
  "retcode" : "105",
  "codeType" : "symbol"
}.
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/manager.py", line 719, in copy_volume_to_image
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher payload['message'] = unicode(error)
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/openstack/common/excutils.py", line 68, in __exit__
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/manager.py", line 713, in copy_volume_to_image
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher image_meta)
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/driver.py", line 355, in copy_volume_to_image
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher attach_info = self._attach_volume(context, volume, properties)
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/driver.py", line 406, in _attach_volume
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher raise exception.VolumeBackendAPIException(data=err_msg)
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Unable to fetch connection information from backend: Response error - {
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher "errorMessage" : "The operation cannot complete because the volume you are trying to map is already accessible by a host group or host in this partition.",
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher "localizedMessage" : "The operation cannot complete because the volume you are trying to map is already accessible by a host group or host in this partition.",
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher "retcode" : "105",
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher "codeType" : "symbol"
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher }.
2014-04-18 12:30:04.862 TRACE oslo.messaging.rpc.dispatcher
2014-04-18 12:30:04.865 ERROR oslo.messaging._drivers.common [req-fca76af9-20d8-407a-9641-416409264944 020beefaca554b97b895fb11fd176177 d1155ac0ccaf4f879b8817151d6a3ee8] Returning exception Bad or unexpected response from the storage volume backend API: Unable to fetch connection information from backend: Response error - {
  "errorMessage" : "The operation cannot complete because the volume you are trying to map is already accessible by a host group or host in this partition.",
  "localizedMessage" : "The operation cannot complete because the volume you are trying to map is already accessible by a host group or host in this partition.",
  "retcode" : "105",
  "codeType" : "symbol"
}. to caller
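
The driver can avoid this failure by checking for an existing mapping before asking the controller to create a new one. The following is a rough sketch only, assuming hypothetical helper and field names rather than the actual E-Series driver API; the real change is in the review linked in the comments below.

    # Sketch only: method and REST field names here are assumptions,
    # not the actual cinder E-Series driver code.
    def map_volume_to_host(client, volume, host, lun_id):
        """Map a volume to a host, tolerating an existing mapping."""
        for mapping in client.get_volume_mappings():
            if (mapping['volumeRef'] == volume['volumeRef'] and
                    mapping['mapRef'] == host['hostRef']):
                # Already accessible by this host/host group: reuse the
                # existing mapping instead of POSTing a duplicate and
                # hitting retcode 105.
                return mapping
        # No existing mapping, so create one.
        return client.create_volume_mapping(volume['volumeRef'],
                                            host['hostRef'], lun_id)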

Revision history for this message
Navneet (singn) wrote :

There are proper ways to handle detaching/terminating the connection for any operation that fails after the volume has been attached. This case, however, arises when the attach operation itself fails after the volume has already been mapped to the host, so the unmapping is not handled by the framework. The bug was reproduced by misconfiguring or turning off open-iscsi on Ubuntu and then trying the copy image/volume operations; a simplified sketch of that sequence follows.
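
The sketch below is illustrative only, not the actual cinder code; the local-connect helper name is an assumption.

    # Illustrative sketch of the failure sequence described above.
    def attach_for_copy(driver, context, volume, connector):
        # 1. initialize_connection maps the LUN on the E-Series controller.
        conn_info = driver.initialize_connection(volume, connector)
        try:
            # 2. The local iSCSI login runs next; with open-iscsi stopped
            #    or misconfigured, this step raises.
            return connect_volume_locally(conn_info)  # hypothetical helper
        except Exception:
            # 3. The cleanup path does not unmap the LUN on the backend, so
            #    the mapping from step 1 is left behind and the next attach
            #    attempt fails with retcode 105.
            raise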

Changed in cinder:
assignee: nobody → Navneet (singn)
Revision history for this message
Openstack Gerrit (openstack-gerrit) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/89482

Changed in cinder:
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/89482
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=dcec1b84f4d3462fc408f4152f4c89b8df79d629
Submitter: Jenkins
Branch: master

commit dcec1b84f4d3462fc408f4152f4c89b8df79d629
Author: Navneet Singh <email address hidden>
Date: Thu Feb 27 20:50:52 2014 +0530

    NetApp fix attach fail for already mapped volume

    This patch fixes the error raised during mapping of a volume
    to the host during attach operation if the volume is already
    mapped to the host.

    Change-Id: I4f711e7ac18eea0dfddab65fd85a3601fe967a88
    Closes-bug: #1310659

Changed in cinder:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in cinder:
milestone: none → juno-1
status: Fix Committed → Fix Released
Tom Barron (tpb)
tags: added: icehouse-backport-potential
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (stable/icehouse)

Fix proposed to branch: stable/icehouse
Review: https://review.openstack.org/127982

Thierry Carrez (ttx)
Changed in cinder:
milestone: juno-1 → 2014.2
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (stable/icehouse)

Reviewed: https://review.openstack.org/127982
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=b05498dec2ba0c987f13f111487522324d2b3281
Submitter: Jenkins
Branch: stable/icehouse

commit b05498dec2ba0c987f13f111487522324d2b3281
Author: Navneet Singh <email address hidden>
Date: Thu Feb 27 20:50:52 2014 +0530

    NetApp fix attach fail for already mapped volume

    This patch fixes the error raised during mapping of a volume
    to the host during attach operation if the volume is already
    mapped to the host.

    Change-Id: I4f711e7ac18eea0dfddab65fd85a3601fe967a88
    Closes-bug: #1310659
    (cherry picked from commit dcec1b84f4d3462fc408f4152f4c89b8df79d629)

tags: added: in-stable-icehouse
Eric Harney (eharney)
tags: removed: icehouse-backport-potential