Comment 3 for bug 1917750

Lee Yarwood (lyarwood) wrote:

I've reproduced the single WWN part on a Focal-based multinode env, but I've not yet reproduced the odd detach behaviour seen in the nova-live-migration job.
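
For reference this is a standard two node devstack, so each node ends up with its own lvmdriver-1 backend. A rough sketch of the subnode local.conf (from memory, names may need adjusting):

[[local|localrc]]
SERVICE_HOST=<controller ip>
ENABLED_SERVICES=n-cpu,c-vol,q-agt,placement-client
CINDER_ENABLED_BACKENDS=lvm:lvmdriver-1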

First a test instance is created:

$ openstack server create --flavor 1 --image cirros-0.5.1-x86_64-disk --network private test

Two volumes are then created, one in each of the two LVM backends:

$ openstack volume create --size 1 test-1
$ openstack volume create --size 1 test-2
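
Which backend each volume landed on can be confirmed via the admin-only os-vol-host-attr:host field, e.g.:

$ openstack volume show test-1 -c "os-vol-host-attr:host"
$ openstack volume show test-2 -c "os-vol-host-attr:host"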

stack@devstack-focal-ctrl:~/devstack$ sudo lvs | grep volume-
  volume-12e58e2e-41cb-46dd-b294-4feb836f9434 stack-volumes-lvmdriver-1 Vwi-aotz-- 1.00g stack-volumes-lvmdriver-1-pool 0.00

stack@devstack-focal-cpu:~/devstack$ sudo lvs | grep volume-
  volume-3e72097c-6281-4cbb-b676-337eb7e267d1 stack-volumes-lvmdriver-1 Vwi-aotz-- 1.00g stack-volumes-lvmdriver-1-pool 0.00
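
The backing iSCSI targets can also be listed on each node; assuming tgt is the iSCSI helper here (the IET vendor string further down suggests it is), something like:

$ sudo tgtadm --lld iscsi --mode target --op show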

Both are then attached to the test instance:

$ openstack server add volume test test-1
$ openstack server add volume test test-2
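
The attachments should also be visible from the API side, e.g.:

$ openstack volume show test-1 -c attachments
$ openstack volume show test-2 -c attachments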

os-brick has told n-cpu to attach /dev/sd{b,c}:

$ sudo virsh domblklist 5f28ff81-93d6-48e9-bdca-7b67d3eea835
 Target   Source
------------------------------------------------------------------------------------
 vda      /opt/stack/data/nova/instances/5f28ff81-93d6-48e9-bdca-7b67d3eea835/disk
 vdb      /dev/sdb
 vdc      /dev/sdc
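
If needed, the disk serials in the domain XML map these devices back to the volume UUIDs, e.g.:

$ sudo virsh dumpxml 5f28ff81-93d6-48e9-bdca-7b67d3eea835 | grep -E 'serial|source dev'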

However, we only see a single WWN symlink under /dev/disk/by-id/wwn-*:

$ ll /dev/disk/by-id/wwn-*
lrwxrwxrwx 1 root root 10 Mar 9 21:08 /dev/disk/by-id/wwn-0x60000000000000000e00000000010001 -> ../../dm-5
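
Not captured above, but querying VPD page 0x83 on both paths should return the same identifier, which is why only a single wwn-* symlink can exist (and it ends up pointing at the mpath dm device):

$ sudo /lib/udev/scsi_id --whitelisted --page=0x83 --device=/dev/sdb
$ sudo /lib/udev/scsi_id --whitelisted --page=0x83 --device=/dev/sdc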

Oddly, I also see the devices being picked up as paths of a multipath device because of the duplicate WWN?!

$ lsblk
[..]
sdb        8:16   0   1G  0 disk
└─mpatha 253:5    0   1G  0 mpath
sdc        8:32   0   1G  0 disk
└─mpatha 253:5    0   1G  0 mpath
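
lsblk can also print the WWN column directly, which makes the duplication obvious:

$ lsblk -o NAME,SIZE,TYPE,WWN /dev/sdb /dev/sdc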

I didn't configure multipathd on this deployment, but it is running anyway, has claimed both paths and is providing an mpath device:

$ sudo multipath -ll
mpatha (360000000000000000e00000000010001) dm-5 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 7:0:0:1 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 8:0:0:1 sdc 8:32 active ready running
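
For envs where multipath isn't wanted at all, a blacklist entry in /etc/multipath.conf should stop these single-path devices being claimed, something like the following (vendor/product taken from the output above) followed by a multipathd reconfigure:

blacklist {
    device {
        vendor  "IET"
        product "VIRTUAL-DISK"
    }
}

$ sudo multipathd reconfigure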