3PAR driver picks wrong port when not in multipath mode.

Bug #1809249 reported by Pedro Rubio
This bug affects 3 people
Affects: Cinder
Status: Fix Released
Importance: Low
Assigned to: Raghavendra Tilay

Bug Description

Summary:

 Unable to create a volume from an image.

Note: We are using the patch from https://bugs.launchpad.net/os-brick/+bug/1687607; before that patch, os-brick used the local WWNN instead of the target WWPN.

       When creating a new volume from an image, the volume is exported to the controller, but os-brick fails to find it:
<snip>
2018-12-18 17:21:20.398 4004 DEBUG os_brick.initiator.linuxfc [-] Scanning host host1 (wwnn: 51402ec001c2fdb9, c: -, t: -, l: 1) rescan_hosts /opt/stack/venv/cinder-20180426T230359Z/lib/python2.7/site-packages/os_brick/initiator/linuxfc.py:93
2018-12-18 17:21:20.399 26689 DEBUG oslo.privsep.daemon [-] privsep: request[140658143329328]: (3, 'os_brick.privileged.rootwrap.execute_root', ('tee', '-a', u'/sys/class/scsi_host/host1/scan'), {'process_input': '- - 1'}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2018-12-18 17:21:20.401 26689 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tee -a /sys/class/scsi_host/host1/scan execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:367
2018-12-18 17:21:20.411 26689 DEBUG oslo_concurrency.processutils [-] CMD "tee -a /sys/class/scsi_host/host1/scan" returned: 0 in 0.010s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:404
2018-12-18 17:21:20.411 26689 DEBUG oslo.privsep.daemon [-] privsep: reply[140658143329328]: (4, ('- - 1', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2018-12-18 17:21:20.413 4004 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -Gil "20010002AC01AF2B" /sys/class/fc_transport/target2:*/port_name execute /opt/stack/venv/cinder-20180426T230359Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:367
2018-12-18 17:21:20.432 4004 DEBUG oslo_concurrency.processutils [-] CMD "grep -Gil "20010002AC01AF2B" /sys/class/fc_transport/target2:*/port_name" returned: 1 in 0.020s execute /opt/stack/venv/cinder-20180426T230359Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:404
2018-12-18 17:21:20.433 4004 DEBUG oslo_concurrency.processutils [-] u'grep -Gil "20010002AC01AF2B" /sys/class/fc_transport/target2:*/port_name' failed. Not Retrying. execute /opt/stack/venv/cinder-20180426T230359Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:452
2018-12-18 17:21:20.434 4004 DEBUG os_brick.initiator.linuxfc [-] Could not get HBA channel and SCSI target ID, path: /sys/class/fc_transport/target2:*, reason: Unexpected error while running command.
Command: grep -Gil "20010002AC01AF2B" /sys/class/fc_transport/target2:*/port_name
Exit code: 1
Stdout: u''
Stderr: u'' _get_hba_channel_scsi_target /opt/stack/venv/cinder-20180426T230359Z/lib/python2.7/site-packages/os_brick/initiator/linuxfc.py:76
2018-12-18 17:21:20.435 4004 DEBUG os_brick.initiator.linuxfc [-] Scanning host host2 (wwnn: 51402ec001c2fdbb, c: -, t: -, l: 1) rescan_hosts /opt/stack/venv/cinder-20180426T230359Z/lib/python2.7/site-packages/os_brick/initiator/linuxfc.py:93
2018-12-18 17:21:20.435 26689 DEBUG oslo.privsep.daemon [-] privsep: request[140658143329328]: (3, 'os_brick.privileged.rootwrap.execute_root', ('tee', '-a', u'/sys/class/scsi_host/host2/scan'), {'process_input': '- - 1'}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2018-12-18 17:21:20.436 26689 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tee -a /sys/class/scsi_host/host2/scan execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:367
2018-12-18 17:21:20.444 26689 DEBUG oslo_concurrency.processutils [-] CMD "tee -a /sys/class/scsi_host/host2/scan" returned: 0 in 0.008s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:404
2018-12-18 17:21:20.444 26689 DEBUG oslo.privsep.daemon [-] privsep: reply[140658143329328]: (4, ('- - 1', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2018-12-18 17:21:22.366 4004 DEBUG os_brick.initiator.connectors.fibre_channel [-] Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:08:00.0-fc-0x20010002ac01af2b-lun-1 _wait_for_device_discovery /opt/stack/venv/cinder-20180426T230359Z/lib/python2.7/site-packages/os_brick/initiator/connectors/fibre_channel.py:145
2018-12-18 17:21:22.367 4004 DEBUG os_brick.initiator.connectors.fibre_channel [-] Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:08:00.1-fc-0x20010002ac01af2b-lun-1 _wait_for_device_discovery /opt/stack/venv/cinder-20180426T230359Z/lib/python2.7/site-packages/os_brick/initiator/connectors/fibre_channel.py:145
2018-12-18 17:21:22.367 4004 ERROR os_brick.initiator.connectors.fibre_channel [-] Fibre Channel volume device not found.
2018-12-18 17:21:22.368 4004 ERROR oslo.service.loopingcall [-] Fixed interval looping call 'os_brick.initiator.connectors.fibre_channel._wait_for_device_discovery' failed: NoFibreChannelVolumeDeviceFound: Unable to find a Fibre Channel volume device.
2018-12-18 17:21:22.368 4004 ERROR oslo.service.loopingcall Traceback (most recent call last):
2018-12-18 17:21:22.368 4004 ERROR oslo.service.loopingcall File "/opt/stack/venv/cinder-20180426T230359Z/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 137, in _run_loop
2018-12-18 17:21:22.368 4004 ERROR oslo.service.loopingcall result = func(*self.args, **self.kw)
2018-12-18 17:21:22.368 4004 ERROR oslo.service.loopingcall File "/opt/stack/venv/cinder-20180426T230359Z/lib/python2.7/site-packages/os_brick/initiator/connectors/fibre_channel.py", line 155, in _wait_for_device_discovery
2018-12-18 17:21:22.368 4004 ERROR oslo.service.loopingcall raise exception.NoFibreChannelVolumeDeviceFound()
2018-12-18 17:21:22.368 4004 ERROR oslo.service.loopingcall NoFibreChannelVolumeDeviceFound: Unable to find a Fibre Channel volume device.
2018-12-18 17:21:22.368 4004 ERROR oslo.service.loopingcall
<snip>

       Note that the os-brick code is looking for target 20010002AC01AF2B, but that target is not an available one:

SDP1-helion-cp1-c1-m1-mgmt:~ # cat /sys/class/fc_transport/target1:0:0/port_name
0x21210002ac01af2b
SDP1-helion-cp1-c1-m1-mgmt:~ # cat /sys/class/fc_transport/target1:0:1/port_name
0x20210002ac01af2b
SDP1-helion-cp1-c1-m1-mgmt:~ # cat /sys/class/fc_transport/target2:0:0/port_name
0x21220002ac01af2b
SDP1-helion-cp1-c1-m1-mgmt:~ # cat /sys/class/fc_transport/target2:0:1/port_name
0x20220002ac01af2b
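
As an aside, all of the visible target WWPNs can be gathered in one pass with a small Python snippet along the following lines; this is only a debugging convenience sketch, not os-brick code:

    import glob

    # Print every target WWPN the SCSI layer currently exposes via sysfs.
    for path in sorted(glob.glob('/sys/class/fc_transport/target*/port_name')):
        with open(path) as f:
            print(path, f.read().strip())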

       On 3PAR:
SP1-3PAR8200-01 cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol Label Partner FailoverState
0:0:1 target ready 2FF70002AC01AF2B 20010002AC01AF2B host FC - 1:0:1 none
0:0:2 target ready 2FF70002AC01AF2B 20020002AC01AF2B host FC - 1:0:2 none
0:1:1 initiator ready 50002ACFF701AF2B 50002AC01101AF2B disk SAS DP-1 - -
0:1:2 initiator ready 50002ACFF701AF2B 50002AC01201AF2B disk SAS DP-2 - -
0:2:1 target ready 2FF70002AC01AF2B 20210002AC01AF2B host FC - 1:2:1 none <<<< included in zoning
0:2:2 target ready 2FF70002AC01AF2B 20220002AC01AF2B host FC - 1:2:2 none <<<< included in zoning
0:2:3 target ready 2FF70002AC01AF2B 20230002AC01AF2B host FC - 1:2:3 none
0:2:4 target loss_sync 2FF70002AC01AF2B 20240002AC01AF2B free FC - 1:2:4 none
0:3:1 peer ready - 480FCFA3C3E5 rcip IP IP0 - -
1:0:1 target ready 2FF70002AC01AF2B 21010002AC01AF2B host FC - 0:0:1 none
1:0:2 target ready 2FF70002AC01AF2B 21020002AC01AF2B host FC - 0:0:2 none
1:1:1 initiator ready 50002ACFF701AF2B 50002AC11101AF2B disk SAS DP-1 - -
1:1:2 initiator ready 50002ACFF701AF2B 50002AC11201AF2B disk SAS DP-2 - -
1:2:1 target ready 2FF70002AC01AF2B 21210002AC01AF2B host FC - 0:2:1 none <<<< included in zoning
1:2:2 target ready 2FF70002AC01AF2B 21220002AC01AF2B host FC - 0:2:2 none <<<< included in zoning
1:2:3 target loss_sync 2FF70002AC01AF2B 21230002AC01AF2B free FC - 0:2:3 none
1:2:4 target ready 2FF70002AC01AF2B 21240002AC01AF2B host FC - 0:2:4 none
1:3:1 peer ready - 480FCFA3BD71 rcip IP IP1 - -
-------------------------------------------------------------------------------------------------------
   18

       As you can see, not all target ports are visible from the controller, but Cinder is looking for the "first" one, 0:0:1 ("20010002AC01AF2B"), as received from hpe3parclient:

<snip>
2018-12-18 17:21:15.445 4004 DEBUG hpe3parclient.http [req-0c8eb5ed-c7a5-4040-b211-f7ecc9f1cbb2 c8b5bfff66a84e0194695f6362bd773f e1ffd2f7872641b79e974c10ffc3e367 - default default] RESP BODY:{"total":18,"members":[{"portPos":{"node":0,"slot":0,"cardPort":1},"mode":2,"linkState":4,"nodeWWN":"2FF70002AC01AF2B","portWWN":"20010002AC01AF2B","type":1,"protocol":1,"partnerPos":{"node":1,"slot":0,"cardPort":1},"failoverState":1,"device":["SP1R08C01LHVMESX99","SP1R08C01LHVMESX99","SP1R07C09LHVMESX97","SP1R07C10LHVMESX98","SP1R07C09LHVMESX01","SP1R07C10LHVMESX15","SP1R07C10LHVMESX13","SP1R07C10LHVMESX16","SP1R07C10LHVMESX14","SP1R07C10LHVMESX12","SP1R07C09LHVMESX06","SP1R07C09LHVMESX04","SP1R07C09LHVMESX02","SP1R07C09LHVMESX05","SP1R07C09LHVMESX03","sp1r07c09lhvmesx07","SP1R07C10LHVMESX18","SP1R07C09lHVMESX08","SP1R07C10LHVMESX23","SP1R07C09LHVMESX09","SP1R07C10LHVMESX20","SP1R07C09LHVMESX11","SP1R07C10LHVMESX19","SP1R07C10LHVMESX24","SP1R07C10LHVMESX25","SP1R07C09LHVMESX10","SP1R07C10LHVMESX21","SP1R07C10LHVMESX26","SP1R07C09LHVMESX22","SP1R07C09LHVMESX27","SP1R07C09LHVMESX28","SP1R08C01LHVMESX29","SP1R08C01LHVMESX30","SP1R08C01LHVMESX31","SP1R08C01LHVMESX32","SP1R08C01LHVMESX33","SP1R08C01LHVMESX34","SP1R08C01LHVMESX35","SP1R08C01LHVMESX36","SP1R08C01LHVMESX37","SP1R08C01LHVMESX38","SP1R07C10LPLXHCP01","SP1R07C10LPLXHCP02"]},{"portPos":{"node":0,"slot":0,"cardPort":2},"mode":2,"linkState":4,"nodeWWN":"2FF70002AC01AF2B","portWWN":"20020002AC01AF2B","type":1,"protocol":1,"partnerPos":{"node":1,"slot":0,"cardPort":2},"failoverState":1,"device":["SP1R08C01LHVMESX99","SP1R07C09LHVMESX97","SP1R07C10LHVMESX98","SP1R07C09LHVMESX01","SP1R07C10LHVMESX15","SP1R07C10LHVMESX13","SP1R07C10LHVMESX16","SP1R07C10LHVMESX14","SP1R07C10LHVMESX12","SP1R07C09LHVMESX06","SP1R07C09LHVMESX04","SP1R07C09LHVMESX02","SP1R07C09LHVMESX05","SP1R07C09LHVMESX03","sp1r07c09lhvmesx07","SP1R07C10LHVMESX18","SP1R07C09lHVMESX08","SP1R07C10LHVMESX23","SP1R07C09LHVMESX09","SP1R07C10LHVMESX20","SP1R07C09LHVMESX11","SP1R07C10LHVMESX19","SP1R07C10LHVMESX24","SP1R07C10LHVMESX25","SP1R07C09LHVMESX10","SP1R07C10LHVMESX21","SP1R07C10LHVMESX26","SP1R07C09LHVMESX22","SP1R07C09LHVMESX27","SP1R07C09LHVMESX28","SP1R08C01LHVMESX29","SP1R08C01LHVMESX30","SP1R08C01LHVMESX31","SP1R08C01LHVMESX32","SP1R08C01LHVMESX33","SP1R08C01LHVMESX34","SP1R08C01LHVMESX35","SP1R08C01LHVMESX36","SP1R08C01LHVMESX37","SP1R08C01LHVMESX38","SP1R07C10LPLXHCP01","SP1R07C10LPLXHCP02"]},{"portPos":{"node":0,"slot":1,"cardPort":1},"mode":3,"linkState":4,"nodeWWN":"50002ACFF701AF2B","portWWN":"50002AC01101AF2B","type":2,"protocol":5,"label":"DP-1","device":["cage0","cage2","cage3"]},{"portPos":{"node":0,"slot":1,"cardPort":2},"mode":3,"linkState":4,"nodeWWN":"50002ACFF701AF2B","portWWN":"50002AC01201AF2B","type":2,"protocol":5,"label":"DP-2","device":["cage1","cage4","cage5"]},{"portPos":{"node":0,"slot":2,"cardPort":1},"mode":2,"linkState":4,"nodeWWN":"2FF70002AC01AF2B","portWWN":"20210002AC01AF2B","type":1,"protocol":1,"partnerPos":{"node":1,"slot":2,"cardPort":1},"failoverState":1,"device":["SP1R07C10LHVMESX15","SP1R07C10LHVMESX16","SP1R07C10LHVMESX14","SP1R07C10LHVMESX12","SP1R08C03LPLXHCO03","SDP-helion-cp1-comp0001-mgmt","SDP-helion-cp1-comp0003-mgmt","SDP-helion-cp1-comp0002-mgmt","sp1r07c09lhvmesx07","SP1R07C10LHVMESX18","SP1R07C09lHVMESX08","SP1R07C10LHVMESX23","SP1R07C09LHVMESX09","SP1R07C10LHVMESX20","SP1R07C09LHVMESX11","SP1R07C10LHVMESX19","SP1R07C10LHVMESX24","SP1R07C10LHVMESX25","SP1R07C09LHVMESX10","SP1R07C10LHVMESX21","SP1R07C10LHVMESX26","SP1R07C09LHVMESX22","
SP1R07C09LHVMESX27","SP1R07C09LHVMESX28","SP1R08C01LHVMESX29","SP1R08C01LHVMESX30","SP1R08C01LHVMESX31","SP1R08C01LHVMESX32","SP1R08C01LHVMESX33","SP1R08C01LHVMESX34","SP1R08C01LHVMESX35","SP1R08C01LHVMESX36","SP1R08C01LHVMESX37","SP1R08C01LHVMESX38","SP1R08C02LPLXHCT02","SP1R08C03LPLXHCT03","SP1R07C10LPLXHCP01","SP1R07C10LPLXHCP02","SDP1-helion-cp1-c1-m1-mgmt"]},{"portPos":{"node":0,"slot":2,"cardPort":2},"mode":2,"linkState":4,"nodeWWN":"2FF70002AC01AF2B","portWWN":"20220002AC01AF2B","type":1,"protocol":1,"partnerPos":{"node":1,"slot":2,"cardPort":2},"failoverState":1,"device":["SP1R07C10LHVMESX15","SP1R07C10LHVMESX16","SP1R07C10LHVMESX14","SP1R07C10LHVMESX12","SP1R08C02LPLXHCO02","SDP-helion-cp1-comp0001-mgmt","SDP-helion-cp1-comp0003-mgmt","SDP-helion-cp1-comp0002-mgmt","sp1r07c09lhvmesx07","SP1R07C10LHVMESX18","SP1R07C09lHVMESX08","SP1R07C10LHVMESX23","SP1R07C09LHVMESX09","SP1R07C10LHVMESX20","SP1R07C09LHVMESX11","SP1R07C10LHVMESX19","SP1R07C10LHVMESX24","SP1R07C10LHVMESX25","SP1R07C09LHVMESX10","SP1R07C10LHVMESX21","SP1R07C10LHVMESX26","SP1R07C09LHVMESX22","SP1R07C09LHVMESX27","SP1R07C09LHVMESX28","SP1R08C01LHVMESX29","SP1R08C01LHVMESX30","SP1R08C01LHVMESX31","SP1R08C01LHVMESX32","SP1R08C01LHVMESX33","SP1R08C01LHVMESX34","SP1R08C01LHVMESX35","SP1R08C01LHVMESX36","SP1R08C01LHVMESX37","SP1R08C01LHVMESX38","SP1R08C02LPLXHCT02","SP1R08C03LPLXHCT03","SP1R07C10LPLXHCP01","SP1R07C10LPLXHCP02","SDP1-helion-cp1-c1-m1-mgmt"]},{"portPos":{"node":0,"slot":2,"cardPort":3},"mode":2,"linkState":4,"nodeWWN":"2FF70002AC01AF2B","portWWN":"20230002AC01AF2B","type":1,"protocol":1,"partnerPos":{"node":1,"slot":2,"cardPort":3},"failoverState":1,"device":[]},{"portPos":{"node":0,"slot":2,"cardPort":4},"mode":2,"linkState":5,"nodeWWN":"2FF70002AC01AF2B","portWWN":"20240002AC01AF2B","type":3,"protocol":1,"partnerPos":{"node":1,"slot":2,"cardPort":4},"failoverState":1,"device":[]},{"portPos":{"node":0,"slot":3,"cardPort":1},"mode":4,"linkState":4,"HWAddr":"480FCFA3C3E5","type":7,"protocol":4,"label":"IP0","device":[],"IPAddr":"172.28.70.138"},{"portPos":{"node":1,"slot":0,"cardPort":1},"mode":2,"linkState":4,"nodeWWN":"2FF70002AC01AF2B","portWWN":"21010002AC01AF2B","type":1,"protocol":1,"partnerPos":{"node":0,"slot":0,"cardPort":1},"failoverState":1,"device":["SP1R08C01LHVMESX99","SP1R08C01LHVMESX99","SP1R07C09LHVMESX97","SP1R07C10LHVMESX98","SP1R07C09LHVMESX01","SP1R07C10LHVMESX15","SP1R07C10LHVMESX13","SP1R07C10LHVMESX16","SP1R07C10LHVMESX14","SP1R07C10LHVMESX12","SP1R07C09LHVMESX06","SP1R07C09LHVMESX04","SP1R07C09LHVMESX02","SP1R07C09LHVMESX05","SP1R07C09LHVMESX03","sp1r07c09lhvmesx07","SP1R07C10LHVMESX18","SP1R07C09lHVMESX08","SP1R07C10LHVMESX23","SP1R07C09LHVMESX09","SP1R07C10LHVMESX20","SP1R07C09LHVMESX11","SP1R07C10LHVMESX19","SP1R07C10LHVMESX24","SP1R07C10LHVMESX25","SP1R07C09LHVMESX10","SP1R07C10LHVMESX21","SP1R07C10LHVMESX26","SP1R07C09LHVMESX22","SP1R07C09LHVMESX27","SP1R07C09LHVMESX28","SP1R08C01LHVMESX29","SP1R08C01LHVMESX30","SP1R08C01LHVMESX31","SP1R08C01LHVMESX32","SP1R08C01LHVMESX33","SP1R08C01LHVMESX34","SP1R08C01LHVMESX35","SP1R08C01LHVMESX36","SP1R08C01LHVMESX37","SP1R08C01LHVMESX38","SP1R07C10LPLXHCP01","SP1R07C10LPLXHCP02"]},{"portPos":{"node":1,"slot":0,"cardPort":2},"mode":2,"linkState":4,"nodeWWN":"2FF70002AC01AF2B","portWWN":"21020002AC01AF2B","type":1,"protocol":1,"partnerPos":{"node":0,"slot":0,"cardPort":2},"failoverState":1,"device":["SP1R08C01LHVMESX99","SP1R07C09LHVMESX97","SP1R07C10LHVMESX98","SP1R07C09LHVMESX01","SP1R07C10LHVMESX15","SP1R07C10LHVMESX13","SP1R07C10LHVMESX16","S
P1R07C10LHVMESX14","SP1R07C10LHVMESX12","SP1R07C09LHVMESX06","SP1R07C09LHVMESX04","SP1R07C09LHVMESX02","SP1R07C09LHVMESX05","SP1R07C09LHVMESX03","sp1r07c09lhvmesx07","SP1R07C10LHVMESX18","SP1R07C09lHVMESX08","SP1R07C10LHVMESX23","SP1R07C09LHVMESX09","SP1R07C10LHVMESX20","SP1R07C09LHVMESX11","SP1R07C10LHVMESX19","SP1R07C10LHVMESX24","SP1R07C10LHVMESX25","SP1R07C09LHVMESX10","SP1R07C10LHVMESX21","SP1R07C10LHVMESX26","SP1R07C09LHVMESX22","SP1R07C09LHVMESX27","SP1R07C09LHVMESX28","SP1R08C01LHVMESX29","SP1R08C01LHVMESX30","SP1R08C01LHVMESX31","SP1R08C01LHVMESX32","SP1R08C01LHVMESX33","SP1R08C01LHVMESX34","SP1R08C01LHVMESX35","SP1R08C01LHVMESX36","SP1R08C01LHVMESX37","SP1R08C01LHVMESX38","SP1R07C10LPLXHCP01","SP1R07C10LPLXHCP02"]},{"portPos":{"node":1,"slot":1,"cardPort":1},"mode":3,"linkState":4,"nodeWWN":"50002ACFF701AF2B","portWWN":"50002AC11101AF2B","type":2,"protocol":5,"label":"DP-1","device":["cage0","cage2","cage3"]},{"portPos":{"node":1,"slot":1,"cardPort":2},"mode":3,"linkState":4,"nodeWWN":"50002ACFF701AF2B","portWWN":"50002AC11201AF2B","type":2,"protocol":5,"label":"DP-2","device":["cage1","cage4","cage5"]},{"portPos":{"node":1,"slot":2,"cardPort":1},"mode":2,"linkState":4,"nodeWWN":"2FF70002AC01AF2B","portWWN":"21210002AC01AF2B","type":1,"protocol":1,"partnerPos":{"node":0,"slot":2,"cardPort":1},"failoverState":1,"device":["SP1R07C10LHVMESX15","SP1R07C10LHVMESX16","SP1R07C10LHVMESX14","SP1R07C10LHVMESX12","SP1R08C03LPLXHCO03","SDP-helion-cp1-comp0001-mgmt","SDP-helion-cp1-comp0003-mgmt","SDP-helion-cp1-comp0002-mgmt","sp1r07c09lhvmesx07","SP1R07C10LHVMESX18","SP1R07C09lHVMESX08","SP1R07C10LHVMESX23","SP1R07C09LHVMESX09","SP1R07C10LHVMESX20","SP1R07C09LHVMESX11","SP1R07C10LHVMESX19","SP1R07C10LHVMESX24","SP1R07C10LHVMESX25","SP1R07C09LHVMESX10","SP1R07C10LHVMESX21","SP1R07C10LHVMESX26","SP1R07C09LHVMESX22","SP1R07C09LHVMESX27","SP1R07C09LHVMESX28","SP1R08C01LHVMESX29","SP1R08C01LHVMESX30","SP1R08C01LHVMESX31","SP1R08C01LHVMESX32","SP1R08C01LHVMESX33","SP1R08C01LHVMESX34","SP1R08C01LHVMESX35","SP1R08C01LHVMESX36","SP1R08C01LHVMESX37","SP1R08C01LHVMESX38","SP1R08C02LPLXHCT02","SP1R08C03LPLXHCT03","SP1R07C10LPLXHCP01","SP1R07C10LPLXHCP02","SDP1-helion-cp1-c1-m1-mgmt"]},{"portPos":{"node":1,"slot":2,"cardPort":2},"mode":2,"linkState":4,"nodeWWN":"2FF70002AC01AF2B","portWWN":"21220002AC01AF2B","type":1,"protocol":1,"partnerPos":{"node":0,"slot":2,"cardPort":2},"failoverState":1,"device":["SP1R07C10LHVMESX15","SP1R07C10LHVMESX16","SP1R07C10LHVMESX14","SP1R07C10LHVMESX12","SP1R08C02LPLXHCO02","SDP-helion-cp1-comp0001-mgmt","SDP-helion-cp1-comp0003-mgmt","SDP-helion-cp1-comp0002-mgmt","sp1r07c09lhvmesx07","SP1R07C10LHVMESX18","SP1R07C09lHVMESX08","SP1R07C10LHVMESX23","SP1R07C09LHVMESX09","SP1R07C10LHVMESX20","SP1R07C09LHVMESX11","SP1R07C10LHVMESX19","SP1R07C10LHVMESX24","SP1R07C10LHVMESX25","SP1R07C09LHVMESX10","SP1R07C10LHVMESX21","SP1R07C10LHVMESX26","SP1R07C09LHVMESX22","SP1R07C09LHVMESX27","SP1R07C09LHVMESX28","SP1R08C01LHVMESX29","SP1R08C01LHVMESX30","SP1R08C01LHVMESX31","SP1R08C01LHVMESX32","SP1R08C01LHVMESX33","SP1R08C01LHVMESX34","SP1R08C01LHVMESX35","SP1R08C01LHVMESX36","SP1R08C01LHVMESX37","SP1R08C01LHVMESX38","SP1R08C02LPLXHCT02","SP1R08C03LPLXHCT03","SP1R07C10LPLXHCP01","SP1R07C10LPLXHCP02","SDP1-helion-cp1-c1-m1-mgmt"]},{"portPos":{"node":1,"slot":2,"cardPort":3},"mode":2,"linkState":5,"nodeWWN":"2FF70002AC01AF2B","portWWN":"21230002AC01AF2B","type":3,"protocol":1,"partnerPos":{"node":0,"slot":2,"cardPort":3},"failoverState":1,"device":[]},{"portPos":{"node":1,"slot":2,"cardPort"
:4},"mode":2,"linkState":4,"nodeWWN":"2FF70002AC01AF2B","portWWN":"21240002AC01AF2B","type":1,"protocol":1,"partnerPos":{"node":0,"slot":2,"cardPort":4},"failoverState":1,"device":[]},{"portPos":{"node":1,"slot":3,"cardPort":1},"mode":4,"linkState":4,"HWAddr":"480FCFA3BD71","type":7,"protocol":4,"label":"IP1","device":[],"IPAddr":"172.28.70.130"}]}
<snip>

       and so the discovery fails.

  Once the zone was modified to include all 3PAR target ports, it works just fine. So, if it is valid to set up access to the 3PAR array through only a subset of its target ports rather than all of them, the code should be reviewed to look for the volume through the targets that are actually available.
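
  To make that suggestion concrete, such a check could filter the ports reported by the array against what the host's HBAs actually see. The helpers below are a sketch of the idea, not the shipped driver code:

    import glob

    def visible_target_wwpns():
        """Collect the target WWPNs this host's HBAs can already see."""
        wwpns = set()
        for path in glob.glob('/sys/class/fc_transport/target*/port_name'):
            with open(path) as f:
                raw = f.read().strip().lower()   # e.g. '0x20210002ac01af2b'
            if raw.startswith('0x'):
                raw = raw[2:]
            wwpns.add(raw)
        return wwpns

    def usable_fc_ports(array_ports):
        """Keep only array FC target ports that are actually zoned/visible.

        array_ports: list of port dicts as returned by the 3PAR WSAPI,
        each carrying a 'portWWN' key (see the RESP BODY above).
        """
        visible = visible_target_wwpns()
        return [p for p in array_ports if p['portWWN'].lower() in visible]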

  Should you need anything else, just let me know.

Revision history for this message
Walt Boring (walter-boring) wrote :

This isn't a problem with os-brick. This is a bug in the 3PAR driver.

Revision history for this message
Walt Boring (walter-boring) wrote :

A workaround that might work is to specify use_multipath_for_image_xfer=True in the 3PAR driver section of cinder.conf.
You can also set enforce_multipath_for_image_xfer=True, which will verify that multipathd is actually running on the cinder controller as well.

Revision history for this message
Pedro Rubio (prubio) wrote :

Thanks Walt. Indeed, multipath was enabled and working fine; I will perform some more tests, though.
As for this being a 3PAR driver bug, that is something I wanted to clarify: if this is a valid configuration (which is a good question), the 3PAR driver should know about the SAN configuration, yet the problem is not detected until os-brick checks for the device.

Revision history for this message
Walt Boring (walter-boring) wrote :

If multipath is enabled in the 3PAR driver section and multipathd is running, then a different codepath is taken in the 3PAR driver that maps all visible ports between the c-vol host and the 3PAR, and the volume attach should work. It's clear from the output listed in the bug that multipath isn't enabled, as there is only a single initiator-target port mapping.
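
Schematically, the initiator_target_map in the FC connection properties differs as follows. This is only an illustration: the initiator WWPNs are placeholders, the per-initiator grouping depends on the zoning, and only the target WWPNs are taken from this bug:

    # Single path: exactly one mapping, to the "first" NSP (0:0:1),
    # which is not part of the zone in this environment.
    single_path_map = {'<initiator_wwpn_1>': ['20010002ac01af2b']}

    # Multipath: every visible target, including the zoned 0:2:x / 1:2:x
    # ports, so device discovery can succeed on at least one path.
    multipath_map = {
        '<initiator_wwpn_1>': ['20210002ac01af2b', '21210002ac01af2b'],
        '<initiator_wwpn_2>': ['20220002ac01af2b', '21220002ac01af2b'],
    }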

Make sure you follow what I mentioned in comment #2: the 3PAR driver stanza in cinder.conf needs those two options specified.

Revision history for this message
Walt Boring (walter-boring) wrote :

It also looks like the FC fabric is manually zoned and isn't zoned to include the port picked by the 3PAR driver when running with multipath capability disabled in cinder.

The only ways to fix this are to:
1) enable multipath correctly in cinder.conf;
2) since manual zoning is in use, add the 3PAR port in question to the FC zone on the switch;
3) use the Cinder Fibre Channel Zone Manager to manage the fabric zoning; or

4) add a new feature to the 3PAR driver that allows the admin to specify which NSP on the 3PAR to use for attachments when multipath isn't enabled correctly (see the sketch after this list). The 3PAR driver currently picks the first enabled NSP it sees on the 3PAR automatically.
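
In sketch form, the driver-side logic for option 4 could look roughly like this; the option name matches the fix that eventually merged (hpe3par_target_nsp), but the function and parameter names here are illustrative, not the exact merged code:

    def pick_target_wwpn(fc_ports, target_nsp=None):
        """Choose the FC target port WWN for a single-path attach.

        fc_ports:   list of target port dicts from the 3PAR WSAPI, each
                    with 'portWWN' and 'portPos' ({'node', 'slot', 'cardPort'}).
        target_nsp: optional 'n:s:p' string from cinder.conf
                    (hpe3par_target_nsp); None keeps the old behavior.
        """
        if target_nsp:
            node, slot, card_port = (int(x) for x in target_nsp.split(':'))
            for port in fc_ports:
                pos = port['portPos']
                if (pos['node'], pos['slot'], pos['cardPort']) == (node, slot, card_port):
                    return port['portWWN']
        # Old behavior: fall back to the first target port reported.
        return fc_ports[0]['portWWN']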

Revision history for this message
Walt Boring (walter-boring) wrote :

Note: this is NOT an os-brick bug.

Changed in os-brick:
status: New → Opinion
importance: Undecided → Low
affects: os-brick → cinder
summary: - OS-brick FC scan fails when 3Par is not exported to controllers/computes
- through all FC targets
+ 3PAR driver picks wrong port when not in multipath mode.
Sneha Rai (sneharai4)
Changed in cinder:
assignee: nobody → Sneha Rai (sneharai4)
Changed in cinder:
assignee: Sneha Rai (sneharai4) → nobody
assignee: nobody → Raghavendra Tilay (raghavendrat)
Changed in cinder:
status: Opinion → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.opendev.org/657585
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=b7c81a7b426136314fb12f5aafcb977b25d49cc6
Submitter: Zuul
Branch: master

commit b7c81a7b426136314fb12f5aafcb977b25d49cc6
Author: raghavendrat <email address hidden>
Date: Tue May 7 05:30:48 2019 -0700

    3PAR: Provide new option to specify NSP for single path attachments

    This fix aims to resolve below mentioned bugs:
    https://bugs.launchpad.net/os-brick/+bug/1812665
    https://bugs.launchpad.net/cinder/+bug/1809249
    https://bugs.launchpad.net/cinder/+bug/1734917

    Given a system connected to HPE 3PAR via FC and multipath is disabled.
    When user tries to create bootable volume, it fails intermittently
    with following error:
    Fibre Channel volume device not found

    This happens when a zone is created using second or later target nsp
    from 3PAR backend.
    In this case, HPE 3PAR client code picks up first target nsp to form
    initiator target map.

    To avoid above mentioned failure, user can specify target nsp in 3PAR
    backend section of cinder.conf as follows:

    hpe3par_target_nsp = <n:s:p>

    This target information is read from cinder.conf and respective
    wwn information is fetched.
    Later initiator target map is created using wwn information and
    bootable volume is created successfully.

    Change-Id: If77d384afbc7097ed55a03e9b4bc7f8e1966a5c5
    Closes bug: #1809249
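
For the deployment in this bug, the new option would presumably point at one of the ports marked "included in zoning" in the showport output above, for example (the backend section name is a placeholder):

    [3par-fc]
    hpe3par_target_nsp = 0:2:1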

Changed in cinder:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (stable/stein)

Fix proposed to branch: stable/stein
Review: https://review.opendev.org/676728

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (stable/rocky)

Fix proposed to branch: stable/rocky
Review: https://review.opendev.org/677215

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.opendev.org/677232

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on cinder (stable/rocky)

Change abandoned by Keith Berger (<email address hidden>) on branch: stable/rocky
Review: https://review.opendev.org/677215
Reason: master change has been reverted, will do a new cherry-pick once the new patch in master has merged and been cherry-picked to Stein

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on cinder (stable/stein)

Change abandoned by Keith Berger (<email address hidden>) on branch: stable/stein
Review: https://review.opendev.org/676728
Reason: master change has been reverted, will do a new cherry-pick once the new patch in master has merged

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (stable/stein)

Fix proposed to branch: stable/stein
Review: https://review.opendev.org/677800

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (stable/rocky)

Fix proposed to branch: stable/rocky
Review: https://review.opendev.org/677801

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.opendev.org/677232
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=9e122f11661b766b2fe2c3a12bab35ef6e14f3f1
Submitter: Zuul
Branch: master

commit 9e122f11661b766b2fe2c3a12bab35ef6e14f3f1
Author: raghavendrat <email address hidden>
Date: Tue May 7 05:30:48 2019 -0700

    3PAR: Add config for NSP single path attach

    This fix aims to resolve below mentioned bugs:
    https://bugs.launchpad.net/os-brick/+bug/1812665
    https://bugs.launchpad.net/cinder/+bug/1809249
    https://bugs.launchpad.net/cinder/+bug/1734917

    Given a system connected to HPE 3PAR via FC and multipath is disabled.
    When user tries to create bootable volume, it fails intermittently
    with following error:

        Fibre Channel volume device not found

    This happens when a zone is created using second or later target nsp
    from 3PAR backend. In this case, HPE 3PAR client code picks up first
    target nsp to form initiator target map.

    To avoid above mentioned failure, user can specify target nsp in 3PAR
    backend section of cinder.conf as follows:

        hpe3par_target_nsp = <n:s:p>

    This target information is read from cinder.conf and respective
    wwn information is fetched. Later initiator target map is created
    using wwn information and bootable volume is created successfully.

    Change-Id: I2da5d4a0334f07967af5ff7aaa39a0ecc4b12204
    Closes-bug: #1809249
    Closes-bug: #1812665
    Closes-bug: #1734917

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.opendev.org/678037

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (stable/stein)

Reviewed: https://review.opendev.org/677800
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=25a9b4027ab9f61180519983376e4b34d4799412
Submitter: Zuul
Branch: stable/stein

commit 25a9b4027ab9f61180519983376e4b34d4799412
Author: raghavendrat <email address hidden>
Date: Tue May 7 05:30:48 2019 -0700

    3PAR: Add config for NSP single path attach

    This fix aims to resolve below mentioned bugs:
    https://bugs.launchpad.net/os-brick/+bug/1812665
    https://bugs.launchpad.net/cinder/+bug/1809249
    https://bugs.launchpad.net/cinder/+bug/1734917

    Given a system connected to HPE 3PAR via FC and multipath is disabled.
    When user tries to create bootable volume, it fails intermittently
    with following error:

        Fibre Channel volume device not found

    This happens when a zone is created using second or later target nsp
    from 3PAR backend. In this case, HPE 3PAR client code picks up first
    target nsp to form initiator target map.

    To avoid above mentioned failure, user can specify target nsp in 3PAR
    backend section of cinder.conf as follows:

        hpe3par_target_nsp = <n:s:p>

    This target information is read from cinder.conf and respective
    wwn information is fetched. Later initiator target map is created
    using wwn information and bootable volume is created successfully.

    Change-Id: I2da5d4a0334f07967af5ff7aaa39a0ecc4b12204
    Closes-bug: #1809249
    Closes-bug: #1812665
    Closes-bug: #1734917
    (cherry picked from commit 9e122f11661b766b2fe2c3a12bab35ef6e14f3f1)

tags: added: in-stable-stein
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (stable/rocky)

Reviewed: https://review.opendev.org/677801
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=df7fd514a77734e5d57a56d5c0f34d0245efc86c
Submitter: Zuul
Branch: stable/rocky

commit df7fd514a77734e5d57a56d5c0f34d0245efc86c
Author: raghavendrat <email address hidden>
Date: Tue May 7 05:30:48 2019 -0700

    3PAR: Add config for NSP single path attach

    This fix aims to resolve below mentioned bugs:
    https://bugs.launchpad.net/os-brick/+bug/1812665
    https://bugs.launchpad.net/cinder/+bug/1809249
    https://bugs.launchpad.net/cinder/+bug/1734917

    Given a system connected to HPE 3PAR via FC and multipath is disabled.
    When user tries to create bootable volume, it fails intermittently
    with following error:

        Fibre Channel volume device not found

    This happens when a zone is created using second or later target nsp
    from 3PAR backend. In this case, HPE 3PAR client code picks up first
    target nsp to form initiator target map.

    To avoid above mentioned failure, user can specify target nsp in 3PAR
    backend section of cinder.conf as follows:

        hpe3par_target_nsp = <n:s:p>

    This target information is read from cinder.conf and respective
    wwn information is fetched. Later initiator target map is created
    using wwn information and bootable volume is created successfully.

    Change-Id: I2da5d4a0334f07967af5ff7aaa39a0ecc4b12204
    Closes-bug: #1809249
    Closes-bug: #1812665
    Closes-bug: #1734917

    (cherry picked from commit 9e122f11661b766b2fe2c3a12bab35ef6e14f3f1)

tags: added: in-stable-rocky
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 15.0.0.0rc1

This issue was fixed in the openstack/cinder 15.0.0.0rc1 release candidate.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (stable/queens)

Reviewed: https://review.opendev.org/678037
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=9d4e3674dbba71182122557fa1bb04c3afdf9f91
Submitter: Zuul
Branch: stable/queens

commit 9d4e3674dbba71182122557fa1bb04c3afdf9f91
Author: raghavendrat <email address hidden>
Date: Tue May 7 05:30:48 2019 -0700

    3PAR: Add config for NSP single path attach

    This fix aims to resolve below mentioned bugs:
    https://bugs.launchpad.net/os-brick/+bug/1812665
    https://bugs.launchpad.net/cinder/+bug/1809249
    https://bugs.launchpad.net/cinder/+bug/1734917

    Given a system connected to HPE 3PAR via FC and multipath is disabled.
    When user tries to create bootable volume, it fails intermittently
    with following error:

        Fibre Channel volume device not found

    This happens when a zone is created using second or later target nsp
    from 3PAR backend. In this case, HPE 3PAR client code picks up first
    target nsp to form initiator target map.

    To avoid above mentioned failure, user can specify target nsp in 3PAR
    backend section of cinder.conf as follows:

        hpe3par_target_nsp = <n:s:p>

    This target information is read from cinder.conf and respective
    wwn information is fetched. Later initiator target map is created
    using wwn information and bootable volume is created successfully.

    Change-Id: I2da5d4a0334f07967af5ff7aaa39a0ecc4b12204
    Closes-bug: #1809249
    Closes-bug: #1812665
    Closes-bug: #1734917
    (cherry picked from commit 9e122f11661b766b2fe2c3a12bab35ef6e14f3f1)

tags: added: in-stable-queens
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 14.0.2

This issue was fixed in the openstack/cinder 14.0.2 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 13.0.7

This issue was fixed in the openstack/cinder 13.0.7 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 12.0.10

This issue was fixed in the openstack/cinder 12.0.10 release.
