The LVM driver reports that multiattach is supported, but whether it actually works depends on which iSCSI target is in use: it does not work with the LIO target.
This is because the LIO target tracks iSCSI authentication credentials per hypervisor (one ACL entry per initiator IQN) rather than per attachment, which breaks when the same volume is attached twice on the same hypervisor: the first detach removes credentials the remaining attachment still needs.
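The failure mode can be illustrated with a toy model (hypothetical names, not the actual cinder/os-brick code): because the target keeps exactly one ACL per initiator IQN, the first `terminate_connection` for a host drops credentials that the host's other attachment still depends on. Per-attachment reference counting would avoid this:

```python
# Toy model of per-initiator ACL bookkeeping on the LIO target.
# Hypothetical class/method names -- this sketches the bug, not the real driver.

class LioTargetModel:
    def __init__(self):
        self.acls = {}          # initiator IQN -> CHAP credentials (one ACL per host)
        self.attach_count = {}  # initiator IQN -> number of live attachments

    def initialize_connection(self, iqn):
        self.acls[iqn] = "chap-credentials"
        self.attach_count[iqn] = self.attach_count.get(iqn, 0) + 1

    def terminate_connection_buggy(self, iqn):
        # Observed behaviour: the first detach removes the ACL outright,
        # even though another attachment on the same hypervisor still needs it.
        self.attach_count[iqn] -= 1
        self.acls.pop(iqn, None)

    def terminate_connection_counted(self, iqn):
        # Reference-counted variant: only drop the ACL on the last detach.
        self.attach_count[iqn] -= 1
        if self.attach_count[iqn] == 0:
            self.acls.pop(iqn, None)

# Same volume attached twice from one hypervisor (vm1 and vm2 on one n-cpu node):
iqn = "iqn.1994-05.com.redhat:ca93ebd8c187"
t = LioTargetModel()
t.initialize_connection(iqn)
t.initialize_connection(iqn)
t.terminate_connection_buggy(iqn)
# The host can no longer authenticate, but vm1 still has the volume attached:
assert iqn not in t.acls
assert t.attach_count[iqn] == 1
```

This matches the transcript below: after detaching vm2, `targetcli ls` shows the ACL and mapped LUN gone while vm1's attachment is still recorded, and the subsequent detach of vm1 fails.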
$ export OS_VOLUME_API_VERSION=3.50
$ cinder create 1 <snipped>
+--------------------------------+----------------------------------------------+
| Property | Value |
+--------------------------------+----------------------------------------------+
| id | 8bd0f068-5361-45e2-8e80-391ccb72310a |
| multiattach | True |
+--------------------------------+----------------------------------------------+
$ nova boot --image cirros-0.3.5-x86_64-disk --flavor m1.nano vm1
$ nova boot --image cirros-0.3.5-x86_64-disk --flavor m1.nano vm2
$ nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------+
| 21fd6622-b919-4ddf-bfd4-3d63a8627843 | vm1 | ACTIVE | - | Running | private=10.0.0.3, fd7b:5e5c:7b68:0:f816:3eff:fec6:16ff |
| 43c1e472-9b50-4241-989e-be6b7f0e4e09 | vm2 | ACTIVE | - | Running | private=10.0.0.8, fd7b:5e5c:7b68:0:f816:3eff:fe31:1f8d |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------+
$ sudo targetcli ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 0]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 0]
o- loopback ......................................................................................................... [Targets: 0]
o- vhost ............................................................................................................ [Targets: 0]
$ cinder show 8bd0f068-5361-45e2-8e80-391ccb72310a <snipped>
+--------------------------------+----------------------------------------------+
| Property | Value |
+--------------------------------+----------------------------------------------+
| attached_servers | ['21fd6622-b919-4ddf-bfd4-3d63a8627843'] |
| attachment_ids | ['bb8e3f48-020c-434d-a8b6-8ddd274def16'] |
| id | 8bd0f068-5361-45e2-8e80-391ccb72310a |
| multiattach | True |
| status | in-use |
$ sudo targetcli ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 1]
| | o- iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a [/dev/stack-volumes-lvmdriver-1/volume-8bd0f068-5361-45e2-8e80-391ccb72310a (1.0GiB) write-thru activated]
| | o- alua ................................................................................................... [ALUA Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 1]
| o- iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a ............................................. [TPGs: 1]
| o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
| o- acls .......................................................................................................... [ACLs: 1]
| | o- iqn.1994-05.com.redhat:ca93ebd8c187 ...................................................... [1-way auth, Mapped LUNs: 1]
| | o- mapped_lun0 ................. [lun0 block/iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a (rw)]
| o- luns .......................................................................................................... [LUNs: 1]
| | o- lun0 [block/iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a (/dev/stack-volumes-lvmdriver-1/volume-8bd0f068-5361-45e2-8e80-391ccb72310a) (default_tg_pt_gp)]
| o- portals .................................................................................................... [Portals: 1]
| o- 10.16.149.255:3260 ............................................................................................... [OK]
o- loopback ......................................................................................................... [Targets: 0]
o- vhost ............................................................................................................ [Targets: 0]
$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+
| 8bd0f068-5361-45e2-8e80-391ccb72310a | in-use | - | 1 | lvmdriver-1 | false | 21fd6622-b919-4ddf-bfd4-3d63a8627843,43c1e472-9b50-4241-989e-be6b7f0e4e09 |
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+
(`targetcli ls` output is unchanged from the listing above, since both attachments are on the same n-cpu node and share the same initiator.)
$ nova volume-detach vm2 8bd0f068-5361-45e2-8e80-391ccb72310a
$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 8bd0f068-5361-45e2-8e80-391ccb72310a | in-use | - | 1 | lvmdriver-1 | false | 21fd6622-b919-4ddf-bfd4-3d63a8627843 |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
$ sudo targetcli ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 1]
| | o- iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a [/dev/stack-volumes-lvmdriver-1/volume-8bd0f068-5361-45e2-8e80-391ccb72310a (1.0GiB) write-thru activated]
| | o- alua ................................................................................................... [ALUA Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 1]
| o- iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a ............................................. [TPGs: 1]
| o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
| o- acls .......................................................................................................... [ACLs: 0]
| o- luns .......................................................................................................... [LUNs: 1]
| | o- lun0 [block/iqn.2010-10.org.openstack:volume-8bd0f068-5361-45e2-8e80-391ccb72310a (/dev/stack-volumes-lvmdriver-1/volume-8bd0f068-5361-45e2-8e80-391ccb72310a) (default_tg_pt_gp)]
| o- portals .................................................................................................... [Portals: 1]
| o- 10.16.149.255:3260 ............................................................................................... [OK]
o- loopback ......................................................................................................... [Targets: 0]
o- vhost ............................................................................................................ [Targets: 0]
^ The mapped_lun and the ACL (auth) entry are gone, even though the volume is still attached to vm1.
$ nova volume-detach vm1 8bd0f068-5361-45e2-8e80-391ccb72310a
<hangs with volume in "detaching" state for a while>
Nova compute error:
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __e>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server self.force_reraise()
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in for>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/block_device.py", line 314, in driver_detach
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server encryption=encryption)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1621, in detach_volume
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server encryption=encryption)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1264, in _disconnect_vo>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server vol_driver.disconnect_volume(connection_info, instance)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/volume/iscsi.py", line 74, in disconnect>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server self.connector.disconnect_volume(connection_info['data'], None)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/utils.py", line 150, in trace_logging_wrapper
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server result = f(*args, **kwargs)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274,>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server return f(*args, **kwargs)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/initiator/connectors/iscsi.py", line 848, in >
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server ignore_errors=ignore_errors)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/initiator/connectors/iscsi.py", line 885, in >
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server force, exc)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/initiator/linuxscsi.py", line 223, in remove_>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server self.remove_scsi_device('/dev/' + device_name, force, exc)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/initiator/linuxscsi.py", line 70, in remove_s>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server self.flush_device_io(device)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/initiator/linuxscsi.py", line 262, in flush_d>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server interval=10, root_helper=self._root_helper)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/executor.py", line 52, in _execute
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server result = self.__execute(*args, **kwargs)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/opt/stack/os-brick/os_brick/privileged/rootwrap.py", line 169, in execute
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server return execute_root(*cmd, **kwargs)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 207, >
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server return self.channel.remote_call(name, args, kwargs)
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in rem>
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server raise exc_type(*result[2])
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server ProcessExecutionError: Unexpected error while running command.
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server Command: blockdev --flushbufs /dev/sdb
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server Exit code: 1
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server Stdout: u''
Aug 09 14:06:45 fedora28.localdomain nova-compute[29260]: ERROR oslo_messaging.rpc.server Stderr: u'blockdev: cannot open /dev/sdb: No such device or address\n'
Volume reverts to "in-use" state:
$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 8bd0f068-5361-45e2-8e80-391ccb72310a | in-use | - | 1 | lvmdriver-1 | false | 21fd6622-b919-4ddf-bfd4-3d63a8627843 |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
The kernel reports iSCSI auth errors:
Aug 09 14:06:46 fedora28.localdomain kernel: Unable to locate Target IQN: iqn.2010-10.org.openstack:volume-cc694275-c9be-4714-aa74-012d4d472ff4 in Storage Node
Aug 09 14:06:46 fedora28.localdomain kernel: iSCSI Login negotiation failed.
Aug 09 14:06:46 fedora28.localdomain kernel: iSCSI Initiator Node: iqn.1994-05.com.redhat:ca93ebd8c187 is not authorized to access iSCSI target portal group: 1.
Aug 09 14:06:46 fedora28.localdomain kernel: iSCSI Login negotiation failed.
Device is dead:
$ ls -l /dev/sdb
brw-rw----. 1 root disk 8, 16 Aug 9 14:01 /dev/sdb
$ sudo xxd /dev/sdb
xxd: /dev/sdb: No such device or address
Reproducing the issue. The OpenStack env was created by devstack in all-in-one mode.
OS: CentOS 7.4.1708
Kernel: 3.10.0-693.2.2
libvirt: 3.9.0
qemu: 2.9.0 (qemu-kvm-ev-2.9.0-16.el7_4.14.1)
OpenStack: master (Rocky)
Steps as follows:
1. after the env is created by devstack, create a multiattach volume type, and add volume_backend_name and multiattach metadata for it
2. create two servers and one volume (with the multiattach type)
3. nova list
+--------------------------------------+------+--------+------------+-------------+--------------------+
| ID                                   | Name | Status | Task State | Power State | Networks           |
+--------------------------------------+------+--------+------------+-------------+--------------------+
| 23f54b0b-7971-442d-a286-47ab5b9ef965 | vm01 | ACTIVE | -          | Running     | public=172.24.4.3  |
| 887e5741-aeb4-4713-aba0-3dac65997663 | vm02 | ACTIVE | -          | Running     | public=172.24.4.13 |
+--------------------------------------+------+--------+------------+-------------+--------------------+
cinder list
+--------------------------------------+-----------+-------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name  | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+-------+------+-------------+----------+--------------------------------------+
| 0b019366-f402-472a-b786-7f66b1035352 | in-use    |       | 1    | multiattach | true     | 23f54b0b-7971-442d-a286-47ab5b9ef965 |
| 90eacbe8-a3a5-4688-99c2-edc1a5e9efbf | in-use    |       | 1    | multiattach | true     | 887e5741-aeb4-4713-aba0-3dac65997663 |
| fb2b767f-1652-45e8-9e75-d0a581af00c4 | available | vol01 | 1    | multiattach | false    |                                      |
+--------------------------------------+-----------+-------+------+-------------+----------+--------------------------------------+
4. nova --debug volume-attach vm01 fb2b767f-1652-45e8-9e75-d0a581af00c4
5. nova --debug volume-attach vm02 fb2b767f-1652-45e8-9e75-d0a581af00c4
6. after attaching the volume to both servers, check the volumes' status and iSCSI
cinder list
+--------------------------------------+--------+-------+------+-------------+----------+---------------------------------------------------------------------------+
| ID                                   | Status | Name  | Size | Volume Type | Bootable | Attached to                                                               |
+--------------------------------------+--------+-------+------+-------------+----------+---------------------------------------------------------------------------+
| 0b019366-f402-472a-b786-7f66b1035352 | in-use |       | 1    | multiattach | true     | 23f54b0b-7971-442d-a286-47ab5b9ef965                                      |
| 90eacbe8-a3a5-4688-99c2-edc1a5e9efbf | in-use |       | 1    | multiattach | true     | 887e5741-aeb4-4713-aba0-3dac65997663                                      |
| fb2b767f-1652-45e8-9e75-d0a581af00c4 | in-use | vol01 | 1    | multiattach | false    | 23f54b0b-7971-442d-a286-47ab5b9ef965,887e5741-aeb4-4713-aba0-3dac65997663 |
+--------------------------------------+--------+-------+------+-------------+----------+---------------------------------------------------------------------------+