FC connect_volume cannot handle multiple luns

Bug #1774293 reported by Patrick East on 2018-05-31
Affects: os-brick
Importance: Undecided
Assigned to: Patrick East

Bug Description

In the code for FibreChannelConnector, the docstring for connect_volume specifies that the required connection properties are:

<snip>

@utils.trace
@synchronized('connect_volume')
def connect_volume(self, connection_properties):
    """Attach the volume to instance_name.

    :param connection_properties: The dictionary that describes all
                                  of the target volume attributes.
    :type connection_properties: dict
    :returns: dict

    connection_properties for Fibre Channel must include:
    target_wwn - World Wide Name
    target_lun - LUN id of the volume
    """

<snip>

In practice, however, "target_wwn" may be a list for multipath connections. The problem is that a multipath connection with not only multiple WWNs but also multiple LUNs is not handled correctly.
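The failure mode can be sketched as follows. This is a deliberately simplified illustration of how a by-path device name is built from the connection properties, not os-brick's actual code; the function name `fc_dev_path` is hypothetical:

```python
# Hypothetical simplification of the connector's device-path construction.
# os-brick builds /dev/disk/by-path names from the target wwn and lun.
def fc_dev_path(pci_num, wwn, lun):
    return "/dev/disk/by-path/pci-%s-fc-0x%s-lun-%s" % (pci_num, wwn, lun)

# A scalar lun produces a valid path:
print(fc_dev_path("0000:00:06.0", "524a937377e70300", 1))
# -> /dev/disk/by-path/pci-0000:00:06.0-fc-0x524a937377e70300-lun-1

# A driver returning a list for target_lun gets interpolated verbatim,
# matching the bogus "lun-[1, 1, 1, 1]" paths seen in the logs below:
print(fc_dev_path("0000:00:06.0", "524a937377e70300", [1, 1, 1, 1]))
# -> /dev/disk/by-path/pci-0000:00:06.0-fc-0x524a937377e70300-lun-[1, 1, 1, 1]
```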

A quick audit of the code reveals several single-LUN assumptions, and a simple test
of returning a list from a driver produces errors like:

May 30 23:43:42 fc-cinder-dev-3 nova-compute[6116]: DEBUG nova.virt.libvirt.volume.fibrechannel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Calling os-brick to attach FC Volume {{(pid=6116) connect_volume /opt/stack/nova/nova/virt/libvirt/volume/fibrechannel.py:53}}
May 30 23:43:42 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] ==> connect_volume: call u"{'args': (<os_brick.initiator.connectors.fibre_channel.FibreChannelConnector object at 0x7f8d61433a10>, {u'initiator_target_map': {u'21000024ff50add1': [u'524A937377E70300', u'524A937377E70301', u'524A937377E70310', u'524A937377E70311'], u'21000024ff50add0': [u'524A937377E70300', u'524A937377E70301', u'524A937377E70310', u'524A937377E70311']}, u'target_discovered': True, u'encrypted': False, u'qos_specs': None, u'discard': True, u'target_lun': [1, 1, 1, 1], u'access_mode': u'rw', u'target_wwn': [u'524A937377E70300', u'524A937377E70301', u'524A937377E70310', u'524A937377E70311']}), 'kwargs': {}}" {{(pid=6116) trace_logging_wrapper /usr/local/lib/python2.7/dist-packages/os_brick/utils.py:146}}
May 30 23:43:42 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.lockutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Lock "connect_volume" acquired by "os_brick.initiator.connectors.fibre_channel.connect_volume" :: waited 0.000s {{(pid=6116) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273}}
May 30 23:43:42 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] execute = <bound method FibreChannelConnector._execute of <os_brick.initiator.connectors.fibre_channel.FibreChannelConnector object at 0x7f8d61433a10>> {{(pid=6116) connect_volume /usr/local/lib/python2.7/dist-packages/os_brick/initiator/connectors/fibre_channel.py:126}}
May 30 23:43:42 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo.privsep.daemon [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] privsep: request[140245044767632]: (3, 'os_brick.privileged.rootwrap.execute_root', ('systool', '-c', 'fc_host', '-v'), {}) {{(pid=6258) loop /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:443}}
May 30 23:43:42 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Running cmd (subprocess): systool -c fc_host -v {{(pid=6258) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:372}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] CMD "systool -c fc_host -v" returned: 0 in 1.289s {{(pid=6258) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:409}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo.privsep.daemon [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] privsep: reply[140245044767632]: (4, ('Class = "fc_host"\n\n Class Device = "host2"\n Class Device path = "/sys/devices/pci0000:00/0000:00:06.0/host2/fc_host/host2"\n dev_loss_tmo = "30"\n fabric_name = "0x2fa28c604f722601"\n issue_lip = <store method only>\n max_npiv_vports = "0"\n node_name = "0x20000024ff50add1"\n npiv_vports_inuse = "0"\n port_id = "0x172000"\n port_name = "0x21000024ff50add1"\n port_state = "Online"\n port_type = "NPort (fabric via point-to-point)"\n speed = "8 Gbit"\n supported_classes = "Class 3"\n supported_speeds = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"\n symbolic_name = "QLE2562 FW:v5.09.00 DVR:v8.07.00.26-k"\n system_hostname = ""\n tgtid_bind_type = "wwpn (World Wide Port Name)"\n uevent = \n vport_create = <store method only>\n vport_delete = <store method only>\n\n Device = "host2"\n Device path = "/sys/devices/pci0000:00/0000:00:06.0/host2"\n fw_dump = \n issue_logo = <store method only>\n nvram = "ISP \x01"\n optrom_ctl = <store method only>\n optrom = \n reset = <store method only>\n sfp = "\x03\x04\x07"\n uevent = "DEVTYPE=scsi_host"\n vpd = "\x82."\n\n\n Class Device = "host3"\n Class Device path = "/sys/devices/pci0000:00/0000:00:07.0/host3/fc_host/host3"\n dev_loss_tmo = "30"\n fabric_name = "0x20018c604f9334f1"\n issue_lip = <store method only>\n max_npiv_vports = "0"\n node_name = "0x20000024ff50add0"\n npiv_vports_inuse = "0"\n port_id = "0xaa0d00"\n port_name = "0x21000024ff50add0"\n port_state
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: = "Online"\n port_type = "NPort (fabric via point-to-point)"\n speed = "8 Gbit"\n supported_classes = "Class 3"\n supported_speeds = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"\n symbolic_name = "QLE2562 FW:v5.09.00 DVR:v8.07.00.26-k"\n system_hostname = ""\n tgtid_bind_type = "wwpn (World Wide Port Name)"\n uevent = \n vport_create = <store method only>\n vport_delete = <store method only>\n\n Device = "host3"\n Device path = "/sys/devices/pci0000:00/0000:00:07.0/host3"\n fw_dump = \n issue_logo = <store method only>\n nvram = "ISP \x01"\n optrom_ctl = <store method only>\n optrom = \n reset = <store method only>\n sfp = "\x03\x04\x07"\n uevent = "DEVTYPE=scsi_host"\n vpd = "\x82."\n\n\n', '')) {{(pid=6258) loop /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:456}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:00:06.0-fc-0x524a937377e70300-lun-[1, 1, 1, 1] {{(pid=6116) _wait_for_device_discovery /usr/local/lib/python2.7/dist-packages/os_brick/initiator/connectors/fibre_channel.py:144}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:00:06.0-fc-0x524a937377e70301-lun-[1, 1, 1, 1] {{(pid=6116) _wait_for_device_discovery /usr/local/lib/python2.7/dist-packages/os_brick/initiator/connectors/fibre_channel.py:144}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:00:06.0-fc-0x524a937377e70310-lun-[1, 1, 1, 1] {{(pid=6116) _wait_for_device_discovery /usr/local/lib/python2.7/dist-packages/os_brick/initiator/connectors/fibre_channel.py:144}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:00:06.0-fc-0x524a937377e70311-lun-[1, 1, 1, 1] {{(pid=6116) _wait_for_device_discovery /usr/local/lib/python2.7/dist-packages/os_brick/initiator/connectors/fibre_channel.py:144}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:00:07.0-fc-0x524a937377e70300-lun-[1, 1, 1, 1] {{(pid=6116) _wait_for_device_discovery /usr/local/lib/python2.7/dist-packages/os_brick/initiator/connectors/fibre_channel.py:144}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:00:07.0-fc-0x524a937377e70301-lun-[1, 1, 1, 1] {{(pid=6116) _wait_for_device_discovery /usr/local/lib/python2.7/dist-packages/os_brick/initiator/connectors/fibre_channel.py:144}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:00:07.0-fc-0x524a937377e70310-lun-[1, 1, 1, 1] {{(pid=6116) _wait_for_device_discovery /usr/local/lib/python2.7/dist-packages/os_brick/initiator/connectors/fibre_channel.py:144}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:00:07.0-fc-0x524a937377e70311-lun-[1, 1, 1, 1] {{(pid=6116) _wait_for_device_discovery /usr/local/lib/python2.7/dist-packages/os_brick/initiator/connectors/fibre_channel.py:144}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: INFO os_brick.initiator.connectors.fibre_channel [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Fibre Channel volume device not yet found. Will rescan & retry. Try number: 0.
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.linuxfc [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Rescaning HBAs [{'host_device': u'host2', 'device_path': u'/sys/devices/pci0000:00/0000:00:06.0/host2/fc_host/host2', 'port_name': u'21000024ff50add1', 'node_name': u'20000024ff50add1'}, {'host_device': u'host3', 'device_path': u'/sys/devices/pci0000:00/0000:00:07.0/host3/fc_host/host3', 'port_name': u'21000024ff50add0', 'node_name': u'20000024ff50add0'}] with connection properties {u'initiator_target_map': {u'21000024ff50add1': [u'524A937377E70300', u'524A937377E70301', u'524A937377E70310', u'524A937377E70311'], u'21000024ff50add0': [u'524A937377E70300', u'524A937377E70301', u'524A937377E70310', u'524A937377E70311']}, u'target_discovered': True, u'encrypted': False, u'qos_specs': None, u'discard': True, u'target_lun': [1, 1, 1, 1], u'access_mode': u'rw', u'target_wwn': [u'524A937377E70300', u'524A937377E70301', u'524A937377E70310', u'524A937377E70311']} {{(pid=6116) rescan_hosts /usr/local/lib/python2.7/dist-packages/os_brick/initiator/linuxfc.py:82}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.linuxfc [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Using initiator target map to exclude HBAs {{(pid=6116) rescan_hosts /usr/local/lib/python2.7/dist-packages/os_brick/initiator/linuxfc.py:90}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Running cmd (subprocess): grep -Gil "524A937377E70300\|524A937377E70301\|524A937377E70310\|524A937377E70311" /sys/class/fc_transport/target2:*/port_name {{(pid=6116) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:372}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] CMD "grep -Gil "524A937377E70300\|524A937377E70301\|524A937377E70310\|524A937377E70311" /sys/class/fc_transport/target2:*/port_name" returned: 2 in 0.020s {{(pid=6116) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:409}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] u'grep -Gil "524A937377E70300\\|524A937377E70301\\|524A937377E70310\\|524A937377E70311" /sys/class/fc_transport/target2:*/port_name' failed. Not Retrying. {{(pid=6116) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:457}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.linuxfc [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Could not get HBA channel and SCSI target ID, path: /sys/class/fc_transport/target2:*, reason: Unexpected error while running command.
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: Command: grep -Gil "524A937377E70300\|524A937377E70301\|524A937377E70310\|524A937377E70311" /sys/class/fc_transport/target2:*/port_name
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: Exit code: 2
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: Stdout: u''
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: Stderr: u'grep: /sys/class/fc_transport/target2:*/port_name: No such file or directory\n' {{(pid=6116) _get_hba_channel_scsi_target /usr/local/lib/python2.7/dist-packages/os_brick/initiator/linuxfc.py:76}}
May 30 23:43:43 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Running cmd (subprocess): grep -Gil "524A937377E70300\|524A937377E70301\|524A937377E70310\|524A937377E70311" /sys/class/fc_transport/target3:*/port_name {{(pid=6116) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:372}}
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] CMD "grep -Gil "524A937377E70300\|524A937377E70301\|524A937377E70310\|524A937377E70311" /sys/class/fc_transport/target3:*/port_name" returned: 2 in 0.026s {{(pid=6116) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:409}}
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] u'grep -Gil "524A937377E70300\\|524A937377E70301\\|524A937377E70310\\|524A937377E70311" /sys/class/fc_transport/target3:*/port_name' failed. Not Retrying. {{(pid=6116) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:457}}
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.linuxfc [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Could not get HBA channel and SCSI target ID, path: /sys/class/fc_transport/target3:*, reason: Unexpected error while running command.
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Command: grep -Gil "524A937377E70300\|524A937377E70301\|524A937377E70310\|524A937377E70311" /sys/class/fc_transport/target3:*/port_name
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Exit code: 2
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Stdout: u''
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Stderr: u'grep: /sys/class/fc_transport/target3:*/port_name: No such file or directory\n' {{(pid=6116) _get_hba_channel_scsi_target /usr/local/lib/python2.7/dist-packages/os_brick/initiator/linuxfc.py:76}}
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: DEBUG os_brick.initiator.linuxfc [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Scanning host host2 (wwnn: 20000024ff50add1, c: -, t: -, l: [1, 1, 1, 1]) {{(pid=6116) rescan_hosts /usr/local/lib/python2.7/dist-packages/os_brick/initiator/linuxfc.py:117}}
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo.privsep.daemon [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] privsep: request[140245198948624]: (3, 'os_brick.privileged.rootwrap.execute_root', ('tee', '-a', u'/sys/class/scsi_host/host2/scan'), {'process_input': '- - [1, 1, 1, 1]'}) {{(pid=6258) loop /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:443}}
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] Running cmd (subprocess): tee -a /sys/class/scsi_host/host2/scan {{(pid=6258) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:372}}
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] CMD "tee -a /sys/class/scsi_host/host2/scan" returned: 1 in 0.006s {{(pid=6258) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:409}}
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo_concurrency.processutils [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] u'tee -a /sys/class/scsi_host/host2/scan' failed. Not Retrying. {{(pid=6258) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:457}}
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: DEBUG oslo.privsep.daemon [None req-67a45e4f-5065-4175-9a2b-91b05ec850c8 admin admin] privsep: Exception during request[140245198948624]: Unexpected error while running command.
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Command: tee -a /sys/class/scsi_host/host2/scan
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Exit code: 1
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Stdout: u'- - [1, 1, 1, 1]'
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Stderr: u'tee: /sys/class/scsi_host/host2/scan: Invalid argument\n' {{(pid=6258) loop /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:449}}
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Traceback (most recent call last):
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 445, in loop
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: reply = self._process_cmd(*msg)
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 428, in _process_cmd
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: ret = func(*f_args, **f_kwargs)
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/priv_context.py", line 209, in _wrap
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: return func(*args, **kwargs)
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: File "/usr/local/lib/python2.7/dist-packages/os_brick/privileged/rootwrap.py", line 194, in execute_root
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: return custom_execute(*cmd, shell=False, run_as_root=False, **kwargs)
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: File "/usr/local/lib/python2.7/dist-packages/os_brick/privileged/rootwrap.py", line 143, in custom_execute
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: on_completion=on_completion, *cmd, **kwargs)
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 424, in execute
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: cmd=sanitized_cmd)
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: ProcessExecutionError: Unexpected error while running command.
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Command: tee -a /sys/class/scsi_host/host2/scan
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Exit code: 1
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Stdout: u'- - [1, 1, 1, 1]'
May 30 23:43:44 fc-cinder-dev-3 nova-compute[6116]: Stderr: u'tee: /sys/class/scsi_host/host2/scan: Invalid argument\n'

We should be able to handle this with a small adjustment, especially since most of the
Linux SCSI code in os-brick already handles multiple LUNs for the iSCSI multipath
code.
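The "tee: /sys/class/scsi_host/host2/scan: Invalid argument" failure in the log happens because the LUN list is interpolated verbatim into the kernel's scan interface, which expects one "channel target lun" triple per write. A minimal sketch of emitting one scan line per distinct LUN instead (illustrative only, not os-brick's actual implementation; `scan_lines` is a hypothetical helper):

```python
# Sketch: the kernel scsi_host scan file takes "channel target lun" triples,
# with '-' as a wildcard. Emit one line per distinct LUN rather than
# writing a Python list representation like "- - [1, 1, 1, 1]".
def scan_lines(channel, target, luns):
    if not isinstance(luns, (list, tuple)):
        luns = [luns]
    c = '-' if channel is None else channel
    t = '-' if target is None else target
    return ["%s %s %s" % (c, t, lun) for lun in sorted(set(luns))]

print(scan_lines(None, None, [1, 1, 1, 1]))  # -> ['- - 1']
```

Each returned line would then be a separate write to /sys/class/scsi_host/hostX/scan.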

An easy option is to make "target_lun" accept a list, similar to "target_wwn". Alternatively,
we could add "target_wwns" and "target_luns" keys, similar to what the iSCSI connectors do.
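The second option can be sketched as pairing each WWN with its LUN, the way the iSCSI connector pairs portals with LUNs. This is a hedged illustration, not os-brick's code; the helper name `get_fc_targets` and its exact fallback behavior are assumptions:

```python
# Hypothetical sketch: group target wwns and luns into (wwn, lun) pairs,
# preferring the plural keys and falling back to the legacy scalar keys.
def get_fc_targets(connection_properties):
    props = connection_properties
    wwns = props.get('target_wwns', props.get('target_wwn'))
    luns = props.get('target_luns', props.get('target_lun'))
    if not isinstance(wwns, list):
        wwns = [wwns]
    if not isinstance(luns, list):
        # Legacy single-lun properties: reuse the one lun for every wwn.
        luns = [luns] * len(wwns)
    return list(zip(wwns, luns))

props = {'target_wwns': ['524A937377E70300', '524A937377E70301'],
         'target_luns': [1, 2]}
print(get_fc_targets(props))
# -> [('524A937377E70300', 1), ('524A937377E70301', 2)]
```

Legacy callers that pass only 'target_wwn' and a scalar 'target_lun' still get a sensible pairing, so the plural keys can remain optional.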

Changed in os-brick:
assignee: nobody → Patrick East (patrick-east)
Changed in os-brick:
status: New → In Progress

Change abandoned by Patrick East (<email address hidden>) on branch: master
Review: https://review.openstack.org/571332

Patrick East (patrick-east) wrote :

:( Guess the status didn't get updated. The changes have been restored: https://review.openstack.org/#/c/571332/

Reviewed: https://review.openstack.org/571332
Committed: https://git.openstack.org/cgit/openstack/os-brick/commit/?id=ba2569855d2f4a2a639205ed73cf20cdd3abda10
Submitter: Zuul
Branch: master

commit ba2569855d2f4a2a639205ed73cf20cdd3abda10
Author: Patrick East <email address hidden>
Date: Mon Jul 16 10:31:52 2018 -0700

    FC Allow for multipath volumes with different LUNs

    We made assumptions in the fibre channel connector code that
    there was only ever a single lun per volume, even with many
    wwns per connections. There is need to support multiple luns
    per multipath device, similar to how the iSCSI volumes work.

    What we do is allow a list for 'target_luns' and 'target_wwns'
    in the connection properties, similar to how the iSCSI connector
    treats things like 'target_portals', 'target_luns', etc. we
    then group together 'targets' as combination of wwpns and the
    lun associated with them. This grouping is used to through
    the attach and detach workflow now to determine dev paths and
    scsi target information for rescans.

    All existing calls with 'target_lun' and 'target_wwn' will
    continue working as before, the new plural keys are optional.

    Change-Id: I393a028457a162228666d8497b695984fefdfab4
    Closes-Bug: #1774293

Changed in os-brick:
status: In Progress → Fix Released