multiattach does not work with LVM+LIO target

Bug #1786327 reported by Eric Harney
This bug affects 2 people
Affects: Cinder
Status: Fix Released
Importance: Medium
Assigned to: Lee Yarwood
Nominated for Queens by Matt Riedemann
Nominated for Rocky by Matt Riedemann

Bug Description

The LVM driver indicates that multiattach is supported, but this depends on which iSCSI target is being used. It does not work with the LIO target.

This is because the LIO target tracks iSCSI authentication credentials (ACLs) on a per-hypervisor basis, which no longer works correctly if the same volume is attached twice on the same hypervisor: detaching one attachment removes the ACL that the remaining attachment still needs.
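
To make that failure mode concrete, here is a minimal Python sketch (all names are hypothetical, not Cinder or LIO code) of why a per-initiator ACL breaks down when two attachments share one hypervisor:

# Minimal sketch (hypothetical names, not Cinder code): LIO keys its
# auth/ACL entry on the initiator IQN, so two attachments from the same
# hypervisor share a single entry.

acls = {}  # initiator IQN -> set of attachment ids sharing that ACL

def attach(initiator_iqn, attachment_id):
    acls.setdefault(initiator_iqn, set()).add(attachment_id)

def detach_pre_fix(initiator_iqn, attachment_id):
    # Pre-fix behaviour: the target driver is always asked to drop the ACL,
    # even though another attachment on this host still needs it.
    acls.pop(initiator_iqn, None)

iqn = "iqn.1994-05.com.redhat:ca93ebd8c187"
attach(iqn, "attachment-vm1")
attach(iqn, "attachment-vm2")
detach_pre_fix(iqn, "attachment-vm2")
print(acls)  # {} -- matches the "ACLs: 0" targetcli output further below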

$ export OS_VOLUME_API_VERSION=3.50
$ cinder create 1 <snipped>
+--------------------------------+----------------------------------------------+
| Property | Value |
+--------------------------------+----------------------------------------------+
| id | 8bd0f068-5361-45e2-8e80-391ccb72310a |
| multiattach | True |
+--------------------------------+----------------------------------------------+
$ nova boot --image cirros-0.3.5-x86_64-disk --flavor m1.nano vm1
$ nova boot --image cirros-0.3.5-x86_64-disk --flavor m1.nano vm2

$ nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------+
| 21fd6622-b919-4ddf-bfd4-3d63a8627843 | vm1 | ACTIVE | - | Running | private=10.0.0.3, fd7b:5e5c:7b68:0:f816:3eff:fec6:16ff |
| 43c1e472-9b50-4241-989e-be6b7f0e4e09 | vm2 | ACTIVE | - | Running | private=10.0.0.8, fd7b:5e5c:7b68:0:f816:3eff:fe31:1f8d |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------+

$ sudo targetcli ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 0]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 0]
  o- loopback ......................................................................................................... [Targets: 0]
  o- vhost ............................................................................................................ [Targets: 0]

$ cinder show 8bd0f068-5361-45e2-8e80-391ccb72310a <snipped>
+--------------------------------+----------------------------------------------+
| Property | Value |
+--------------------------------+----------------------------------------------+
| attached_servers | ['21fd6622-b919-4ddf-bfd4-3d63a8627843'] |
| attachment_ids | ['bb8e3f48-020c-434d-a8b6-8ddd274def16'] |
| id | 8bd0f068-5361-45e2-8e80-391ccb72310a |
| multiattach | True |
| status | in-use |

$ sudo targetcli ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 1]
  | | o- iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a [/dev/stack-volumes-lvmdriver-1/volume-8bd0f068-5361-45e2-8e80-391ccb72310a (1.0GiB) write-thru activated]
  | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a ............................................. [TPGs: 1]
  | o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
  | o- acls .......................................................................................................... [ACLs: 1]
  | | o- iqn.1994-05.com.redhat:ca93ebd8c187 ...................................................... [1-way auth, Mapped LUNs: 1]
  | | o- mapped_lun0 ................. [lun0 block/iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a (rw)]
  | o- luns .......................................................................................................... [LUNs: 1]
  | | o- lun0 [block/iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a (/dev/stack-volumes-lvmdriver-1/volume-8bd0f068-5361-45e2-8e80-391ccb72310a) (default_tg_pt_gp)]
  | o- portals .................................................................................................... [Portals: 1]
  | o- 10.16.149.255:3260 ............................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]
  o- vhost ............................................................................................................ [Targets: 0]

$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+
| 8bd0f068-5361-45e2-8e80-391ccb72310a | in-use | - | 1 | lvmdriver-1 | false | 21fd6622-b919-4ddf-bfd4-3d63a8627843,43c1e472-9b50-4241-989e-be6b7f0e4e09 |
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+

targetcli ls output is the same as above (same n-cpu node, so the same initiator ACL entry is reused)

$ nova volume-detach vm2 8bd0f068-5361-45e2-8e80-391ccb72310a
$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 8bd0f068-5361-45e2-8e80-391ccb72310a | in-use | - | 1 | lvmdriver-1 | false | 21fd6622-b919-4ddf-bfd4-3d63a8627843 |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+

$ sudo targetcli ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 1]
  | | o- iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a [/dev/stack-volumes-lvmdriver-1/volume-8bd0f068-5361-45e2-8e80-391ccb72310a (1.0GiB) write-thru activated]
  | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a ............................................. [TPGs: 1]
  | o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
  | o- acls .......................................................................................................... [ACLs: 0]
  | o- luns .......................................................................................................... [LUNs: 1]
  | | o- lun0 [block/iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a (/dev/stack-volumes-lvmdriver-1/volume-8bd0f068-5361-45e2-8e80-391ccb72310a) (default_tg_pt_gp)]
  | o- portals .................................................................................................... [Portals: 1]
  | o- 10.16.149.255:3260 ............................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]
  o- vhost ............................................................................................................ [Targets: 0]

^ the mapped_lun and the per-initiator auth ACL have been removed, even though the volume is still attached to vm1

$ nova volume-detach vm1 8bd0f068-5361-45e2-8e80-391ccb72310a
<hangs with volume in "detaching" state for a while>

Nova compute error:
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __e>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server self.force_reraise()
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in for>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/block_device.py", line 314, in driver_detach
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server encryption=encryption)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1621, in detach_volume
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server encryption=encryption)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1264, in _disconnect_vo>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server vol_driver.disconnect_volume(connection_info, instance)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/volume/iscsi.py", line 74, in disconnect>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server self.connector.disconnect_volume(connection_info['data'], None)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/utils.py", line 150, in trace_logging_wrapper
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server result = f(*args, **kwargs)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274,>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server return f(*args, **kwargs)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/initiator/connectors/iscsi.py", line 848, in >
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server ignore_errors=ignore_errors)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/initiator/connectors/iscsi.py", line 885, in >
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server force, exc)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/initiator/linuxscsi.py", line 223, in remove_>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server self.remove_scsi_device('/dev/' + device_name, force, exc)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/initiator/linuxscsi.py", line 70, in remove_s>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server self.flush_device_io(device)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/initiator/linuxscsi.py", line 262, in flush_d>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server interval=10, root_helper=self._root_helper)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/executor.py", line 52, in _execute
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server result = self.__execute(*args, **kwargs)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/privileged/rootwrap.py", line 169, in execute
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server return execute_root(*cmd, **kwargs)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 207, >
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server return self.channel.remote_call(name, args, kwargs)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in rem>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server raise exc_type(*result[2])
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server ProcessExecutionError: Unexpected error while running command.
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server Command: blockdev --flushbufs /dev/sdb
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server Exit code: 1
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server Stdout: u''
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server Stderr: u'blockdev: cannot open /dev/sdb: No such device or address\n'

Volume reverts to "in-use" state:
$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 8bd0f068-5361-45e2-8e80-391ccb72310a | in-use | - | 1 | lvmdriver-1 | false | 21fd6622-b919-4ddf-bfd4-3d63a8627843 |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+

The kernel reports iSCSI auth errors:

Aug 09 14:06:46 fedora28.localdomain kernel: Unable to locate Target IQN: iqn.2010-10.org.openstack:volume-cc694275-c9be-4714-aa74-012d4d472ff4 in Storage Node
Aug 09 14:06:46 fedora28.localdomain kernel: iSCSI Login negotiation failed.
Aug 09 14:06:46 fedora28.localdomain kernel: iSCSI Initiator Node: iqn.1994-05.com.redhat:ca93ebd8c187 is not authorized to access iSCSI target portal group: 1.
Aug 09 14:06:46 fedora28.localdomain kernel: iSCSI Login negotiation failed.

Device is dead:
$ ls -l /dev/sdb
brw-rw----. 1 root disk 8, 16 Aug 9 14:01 /dev/sdb
$ sudo xxd /dev/sdb
xxd: /dev/sdb: No such device or address

Boxiang Zhu (bxzhu-5355)
Changed in cinder:
status: New → Confirmed
Revision history for this message
Boxiang Zhu (bxzhu-5355) wrote :

Reproduced the issue. The OpenStack env was created by devstack in all-in-one mode.

OS: centos 7.4.1708
Kernel: 3.10.0-693.2.2
libvirt: 3.9.0
qemu: 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.14.1)
OpenStack: master(rocky)

Steps followed:
1. After the env is created by devstack, create a multiattach volume type and add volume_backend_name and multiattach metadata to it
2. Create two servers and one volume (with the multiattach type)
3. nova list
+--------------------------------------+------+--------+------------+-------------+--------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+--------------------+
| 23f54b0b-7971-442d-a286-47ab5b9ef965 | vm01 | ACTIVE | - | Running | public=172.24.4.3 |
| 887e5741-aeb4-4713-aba0-3dac65997663 | vm02 | ACTIVE | - | Running | public=172.24.4.13 |
+--------------------------------------+------+--------+------------+-------------+--------------------+
cinder list
+--------------------------------------+-----------+-------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------+------+-------------+----------+--------------------------------------+
| 0b019366-f402-472a-b786-7f66b1035352 | in-use | | 1 | multiattach | true | 23f54b0b-7971-442d-a286-47ab5b9ef965 |
| 90eacbe8-a3a5-4688-99c2-edc1a5e9efbf | in-use | | 1 | multiattach | true | 887e5741-aeb4-4713-aba0-3dac65997663 |
| fb2b767f-1652-45e8-9e75-d0a581af00c4 | available | vol01 | 1 | multiattach | false | |
+--------------------------------------+-----------+-------+------+-------------+----------+--------------------------------------+
4. nova --debug volume-attach vm01 fb2b767f-1652-45e8-9e75-d0a581af00c4
5. nova --debug volume-attach vm02 fb2b767f-1652-45e8-9e75-d0a581af00c4
6. After attaching the volume to both servers, check the volumes' status and iSCSI
cinder list
+--------------------------------------+--------+-------+------+-------------+----------+---------------------------------------------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+-------+------+-------------+----------+---------------------------------------------------------------------------+
| 0b019366-f402-472a-b786-7f66b1035352 | in-use | | 1 | multiattach | true | 23f54b0b-7971-442d-a286-47ab5b9ef965 |
| 90eacbe8-a3a5-4688-99c2-edc1a5e9efbf | in-use | | 1 | multiattach | true | 887e5741-aeb4-4713-aba0-3dac65997663 |
| fb2b767f-1652-45e8-9e75-d0a581af00c4 | in-use | vol01 | 1 | multiattach | false | 23f54b0b-7971-442d-a286-47ab5b9ef965,887e5741-aeb4-4713-ab...


Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to cinder (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/595231

Matt Riedemann (mriedem)
Changed in cinder:
importance: Undecided → Medium
tags: added: lvm multiattach
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to cinder (master)

Reviewed: https://review.openstack.org/595231
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=b4883db7c09d8254bfbe34669873919dbc008943
Submitter: Zuul
Branch: master

commit b4883db7c09d8254bfbe34669873919dbc008943
Author: Eric Harney <email address hidden>
Date: Wed Aug 22 11:00:32 2018 -0400

    LVM: Disable multiattach for LIO iSCSI target

    Multiattach does not yet work with the LIO iSCSI target.

    Related-Bug: #1786327

    Change-Id: I84f607de13bc17b00609ad37121d8678f7f4a920
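
As a rough illustration of what this interim change amounts to (option and key names here are assumptions, not the exact driver code), the advertised capabilities depend on the configured target helper, so LIO deployments simply stop reporting multiattach support:

# Rough sketch (assumed names, not the actual Cinder driver code): gate the
# advertised multiattach capability on the configured iSCSI target helper.

def build_volume_stats(target_helper):
    return {
        "volume_backend_name": "lvmdriver-1",
        # Only non-LIO targets keep advertising multiattach support.
        "multiattach": target_helper != "lioadm",
    }

print(build_volume_stats("lioadm"))  # multiattach: False
print(build_volume_stats("tgtadm"))  # multiattach: True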

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to cinder (stable/rocky)

Related fix proposed to branch: stable/rocky
Review: https://review.openstack.org/596493

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to cinder (stable/rocky)

Reviewed: https://review.openstack.org/596493
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=dd5a565c5ba587b0306bb29509293cf1b7c04bc3
Submitter: Zuul
Branch: stable/rocky

commit dd5a565c5ba587b0306bb29509293cf1b7c04bc3
Author: Eric Harney <email address hidden>
Date: Wed Aug 22 11:00:32 2018 -0400

    LVM: Disable multiattach for LIO iSCSI target

    Multiattach does not yet work with the LIO iSCSI target.

    Related-Bug: #1786327

    Change-Id: I84f607de13bc17b00609ad37121d8678f7f4a920
    (cherry picked from commit b4883db7c09d8254bfbe34669873919dbc008943)

tags: added: in-stable-rocky
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to cinder (stable/queens)

Related fix proposed to branch: stable/queens
Review: https://review.openstack.org/614278

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/616212

Changed in cinder:
assignee: nobody → Lee Yarwood (lyarwood)
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (stable/rocky)

Fix proposed to branch: stable/rocky
Review: https://review.openstack.org/618472

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.openstack.org/618473

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/616212
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=d0b59152ed413f39599ffd1554c0e4e0038431aa
Submitter: Zuul
Branch: master

commit d0b59152ed413f39599ffd1554c0e4e0038431aa
Author: Lee Yarwood <email address hidden>
Date: Wed Nov 7 14:26:26 2018 +0000

    lvm: Avoid premature calls to terminate_connection for multiattach vols

    Previously the LVM volume driver would always call terminate_connection
    on the configured target driver regardless of the associated volume
    being attached to multiple instances on the same host. This isn't an
    issue for the tgt target driver used by upstream CI as this call is a
    noop but does cause the lioadm driver to prematurely remove specific
    host ACLs for volumes that are still being used.

    This change introduces a simple check to ensure that the target driver
    is only called when a single attachment remains that is in turn using
    the connector provided by the caller.

    Finally as a result of this bugfix the changes introduced by
    I84f607de13bc17b00609ad37121d8678f7f4a920 to disable the multiattach
    feature when using the lioadm target driver are removed.

    Closes-bug: #1786327
    Change-Id: Ib5aa1b7578f7d3200185566ff5f8634dd519d020
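
A minimal sketch of the kind of check the commit message describes (helper and field names are assumptions, not the exact Cinder implementation): the target driver's terminate_connection is only reached when the connector being removed has no other live attachment behind it.

# Minimal sketch (assumed names, not the exact Cinder code): only call the
# target driver when this is the last attachment using the caller's connector.

def terminate_connection(volume, connector, attachments, target_terminate):
    same_initiator = [
        a for a in attachments
        if a.get("connector", {}).get("initiator") == connector.get("initiator")
    ]
    if len(same_initiator) > 1:
        # Another attachment from this host still needs the ACL/export.
        return "skipped: other attachments remain on this connector"
    return target_terminate(volume, connector)

attachments = [
    {"id": "attachment-vm1", "connector": {"initiator": "iqn.1994-05.com.redhat:ca93ebd8c187"}},
    {"id": "attachment-vm2", "connector": {"initiator": "iqn.1994-05.com.redhat:ca93ebd8c187"}},
]
connector = {"initiator": "iqn.1994-05.com.redhat:ca93ebd8c187"}
print(terminate_connection("vol", connector, attachments,
                           lambda v, c: "target driver: ACL removed"))
# -> "skipped: ..." while both attachments exist, so the ACL survives the
#    first detach instead of being removed prematurely.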

Changed in cinder:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to cinder (stable/queens)

Reviewed: https://review.openstack.org/614278
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=7503af11e6d00be718de054981fb5969fd0e4a5c
Submitter: Zuul
Branch: stable/queens

commit 7503af11e6d00be718de054981fb5969fd0e4a5c
Author: Eric Harney <email address hidden>
Date: Wed Aug 22 11:00:32 2018 -0400

    LVM: Disable multiattach for LIO iSCSI target

    Multiattach does not yet work with the LIO iSCSI target.

    Related-Bug: #1786327

    Change-Id: I84f607de13bc17b00609ad37121d8678f7f4a920
    (cherry picked from commit b4883db7c09d8254bfbe34669873919dbc008943)
    (cherry picked from commit dd5a565c5ba587b0306bb29509293cf1b7c04bc3)

tags: added: in-stable-queens
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (stable/rocky)

Reviewed: https://review.openstack.org/618472
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=3bec6d69a8ef90b3d3582f73be7968e0adfad23a
Submitter: Zuul
Branch: stable/rocky

commit 3bec6d69a8ef90b3d3582f73be7968e0adfad23a
Author: Lee Yarwood <email address hidden>
Date: Wed Nov 7 14:26:26 2018 +0000

    lvm: Avoid premature calls to terminate_connection for multiattach vols

    Previously the LVM volume driver would always call terminate_connection
    on the configured target driver regardless of the associated volume
    being attached to multiple instances on the same host. This isn't an
    issue for the tgt target driver used by upstream CI as this call is a
    noop but does cause the lioadm driver to prematurely remove specific
    host ACLs for volumes that are still being used.

    This change introduces a simple check to ensure that the target driver
    is only called when a single attachment remains that is in turn using
    the connector provided by the caller.

    Finally as a result of this bugfix the changes introduced by
    I84f607de13bc17b00609ad37121d8678f7f4a920 to disable the multiattach
    feature when using the lioadm target driver are removed.

    Closes-bug: #1786327
    Change-Id: Ib5aa1b7578f7d3200185566ff5f8634dd519d020
    (cherry picked from commit d0b59152ed413f39599ffd1554c0e4e0038431aa)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (stable/queens)

Reviewed: https://review.openstack.org/618473
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=f72f3486269880d8a1822a703c37b6f04bd58356
Submitter: Zuul
Branch: stable/queens

commit f72f3486269880d8a1822a703c37b6f04bd58356
Author: Lee Yarwood <email address hidden>
Date: Wed Nov 7 14:26:26 2018 +0000

    lvm: Avoid premature calls to terminate_connection for multiattach vols

    Previously the LVM volume driver would always call terminate_connection
    on the configured target driver regardless of the associated volume
    being attached to multiple instances on the same host. This isn't an
    issue for the tgt target driver used by upstream CI as this call is a
    noop but does cause the lioadm driver to prematurely remove specific
    host ACLs for volumes that are still being used.

    This change introduces a simple check to ensure that the target driver
    is only called when a single attachment remains that is in turn using
    the connector provided by the caller.

    Finally as a result of this bugfix the changes introduced by
    I84f607de13bc17b00609ad37121d8678f7f4a920 to disable the multiattach
    feature when using the lioadm target driver are removed.

    Closes-bug: #1786327
    Change-Id: Ib5aa1b7578f7d3200185566ff5f8634dd519d020
    (cherry picked from commit d0b59152ed413f39599ffd1554c0e4e0038431aa)
    (cherry picked from commit 3bec6d69a8ef90b3d3582f73be7968e0adfad23a)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 12.0.5

This issue was fixed in the openstack/cinder 12.0.5 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 13.0.3

This issue was fixed in the openstack/cinder 13.0.3 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 14.0.0.0rc1

This issue was fixed in the openstack/cinder 14.0.0.0rc1 release candidate.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to cinder (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/cinder/+/836073

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to cinder (master)

Reviewed: https://review.opendev.org/c/openstack/cinder/+/836073
Committed: https://opendev.org/openstack/cinder/commit/daa803b8ee1adcadc7d0e797bd04b287a7df5946
Submitter: "Zuul (22348)"
Branch: master

commit daa803b8ee1adcadc7d0e797bd04b287a7df5946
Author: Gorka Eguileor <email address hidden>
Date: Tue Mar 8 21:37:21 2022 +0100

    LVM: terminate_connection fails if no initiator

    The LVM driver assumes that all connecting hosts will have the iSCSI
    initiator installed and configured. If they don't, then there won't be
    an "initiator" key in the connector properties dictionary and the call
    to terminate connection will always fail with a KeyError exception on
    the 'initiator' key.

    This is the case if we don't have iSCSI configured on the computes
    because we are only using NVMe-oF volumes with the nvmet target.

    This patch starts using the dictionary ``get`` method so there is no
    failure even when the keys don't exist, and it also differentiates by
    target type so they target the identifier they care about, which is the
    ``initiator`` for iSCSI and ``nqn`` for NVMe-oF.

    Closes-Bug: #1966513
    Related-Bug: #1786327
    Change-Id: Ie967a42188bd020178cb7af527e3dd3ab8975a3d
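
As an illustration of the behaviour that commit message describes (names are assumptions, not the actual code), the connector lookup stops assuming an iSCSI initiator is always present and picks the identifier matching the target type:

# Sketch (assumed names, not the actual Cinder code) of tolerant connector
# handling: use .get() and select the identifier relevant to the target type.

def connector_identifier(connector, target_protocol):
    if target_protocol == "nvmet":            # NVMe-oF target
        return connector.get("nqn")
    return connector.get("initiator")         # iSCSI targets (tgt, LIO, ...)

# A compute host configured only for NVMe-oF has no iSCSI initiator:
nvme_only_host = {"nqn": "nqn.2014-08.org.nvmexpress:uuid:example"}
print(connector_identifier(nvme_only_host, "nvmet"))   # the host's NQN
print(connector_identifier(nvme_only_host, "lioadm"))  # None, not a KeyError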
