[SRU] multipath iscsi does not logout of sessions on xenial

Bug #1623700 reported by Patrick East on 2016-09-14
This bug affects 1 person
Affects Status Importance Assigned to Milestone
Ubuntu Cloud Archive
Medium
Unassigned
Mitaka
Medium
Unassigned
Newton
Medium
Unassigned
os-brick
Undecided
Unassigned
python-os-brick (Ubuntu)
Medium
Unassigned
Xenial
Medium
Hua Zhang
Yakkety
Medium
Unassigned

Bug Description

[Impact]

 * multipath-tools has a bug where 'multipath -r' can cause /dev/mapper/<wwid> to be deleted and re-created momentarily (bug #1621340 tracks this problem). As a result, the os.stat(mdev) call that os-brick makes right after _rescan_multipath ('multipath -r') can fail if it runs before the multipath device has been re-created. This also leads to multipath iSCSI not logging out of sessions on xenial.
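 The race can be illustrated with a small retry helper (a sketch only, not the actual os-brick code; wait_for_path and its parameters are invented here): a bare os.stat() issued immediately after the reload can land in the window where /dev/mapper/<wwid> is gone, while a short polling loop rides it out.

```python
import errno
import os
import time

def wait_for_path(path, attempts=5, interval=0.2):
    """Poll os.stat() until the node exists (hypothetical helper).

    'multipath -r' may delete and re-create /dev/mapper/<wwid>
    momentarily; a single os.stat() in that window raises ENOENT,
    whereas retrying a few times tolerates the gap.
    """
    for _ in range(attempts):
        try:
            os.stat(path)
            return True
        except OSError as exc:
            if exc.errno != errno.ENOENT:
                raise
            time.sleep(interval)
    return False
```

 Note that the fix that actually landed upstream (see the commit later in this report) removes the forced reload instead of retrying the stat.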

[Test Case]

 * Enable cinder multipath by adding iscsi_ip_address and iscsi_secondary_ip_addresses in cinder.conf
 * Enable nova multipath by adding iscsi_use_multipath=True in the [libvirt] section of nova.conf
 * Detach an iSCSI volume
 * Check that devices/symlinks do not get messed up as shown below, or check that the multipath device /dev/mapper/<wwid> is not deleted and re-created momentarily
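 The last check can be automated with a small sampling loop (a sketch, not part of the official test plan; monitor_mapper_node is a name invented here): poll the device node while the detach runs and count any samples where it was missing.

```python
import os
import time

def monitor_mapper_node(path, duration=1.0, interval=0.02):
    """Sample os.path.exists(path) for `duration` seconds and return
    the number of samples where the node was missing; a non-zero
    count reproduces the 'deleted and re-created momentarily'
    symptom described in this bug."""
    missing = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        if not os.path.exists(path):
            missing += 1
        time.sleep(interval)
    return missing
```

 Run it against /dev/mapper/<wwid> in one shell while detaching the volume in another; with the fix applied the count should stay at zero.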

[Regression Potential]

 * multipath-tools loads devices on its own; we shouldn't need to force multipathd to do a reload, so there should be no regression potential.

stack@xenial-devstack-master-master-20160914-092014:~$ sudo iscsiadm -m session
tcp: [5] 10.0.1.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
tcp: [6] 10.0.5.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
tcp: [7] 10.0.1.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
tcp: [8] 10.0.5.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)

stack@xenial-devstack-master-master-20160914-092014:~$ sudo iscsiadm -m node
10.0.1.11:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873
10.0.5.11:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873
10.0.5.10:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873
10.0.1.10:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873

stack@xenial-devstack-master-master-20160914-092014:~$ sudo tail -f /var/log/syslog
Sep 14 22:33:14 xenial-qemu-tester multipath: dm-0: failed to get udev uid: Invalid argument
Sep 14 22:33:14 xenial-qemu-tester multipath: dm-0: failed to get sysfs uid: Invalid argument
Sep 14 22:33:14 xenial-qemu-tester multipath: dm-0: failed to get sgio uid: No such file or directory
Sep 14 22:33:14 xenial-qemu-tester systemd[1347]: dev-disk-by\x2did-scsi\x2d3624a93709a738ed78583fd12003fb774.device: Dev dev-disk-by\x2did-scsi\x2d3624a93709a738ed78583fd12003fb774.device appeared twice with different sysfs paths /sys/devices/platform/host6/session5/target6:0:0/6:0:0:1/block/sda and /sys/devices/virtual/block/dm-0
Sep 14 22:33:14 xenial-qemu-tester systemd[1347]: dev-disk-by\x2did-wwn\x2d0x624a93709a738ed78583fd12003fb774.device: Dev dev-disk-by\x2did-wwn\x2d0x624a93709a738ed78583fd12003fb774.device appeared twice with different sysfs paths /sys/devices/platform/host6/session5/target6:0:0/6:0:0:1/block/sda and /sys/devices/virtual/block/dm-0
Sep 14 22:33:14 xenial-qemu-tester systemd[1]: dev-disk-by\x2did-scsi\x2d3624a93709a738ed78583fd12003fb774.device: Dev dev-disk-by\x2did-scsi\x2d3624a93709a738ed78583fd12003fb774.device appeared twice with different sysfs paths /sys/devices/platform/host6/session5/target6:0:0/6:0:0:1/block/sda and /sys/devices/virtual/block/dm-0
Sep 14 22:33:14 xenial-qemu-tester systemd[1]: dev-disk-by\x2did-wwn\x2d0x624a93709a738ed78583fd12003fb774.device: Dev dev-disk-by\x2did-wwn\x2d0x624a93709a738ed78583fd12003fb774.device appeared twice with different sysfs paths /sys/devices/platform/host6/session5/target6:0:0/6:0:0:1/block/sda and /sys/devices/virtual/block/dm-0
Sep 14 22:33:14 xenial-qemu-tester kernel: [22362.163521] audit: type=1400 audit(1473892394.556:21): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-6e1017a7-6dea-418f-ad9b-879da085bd13" pid=32665 comm="apparmor_parser"
Sep 14 22:33:14 xenial-qemu-tester kernel: [22362.173614] audit: type=1400 audit(1473892394.568:22): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-6e1017a7-6dea-418f-ad9b-879da085bd13//qemu_bridge_helper" pid=32665 comm="apparmor_parser"
Sep 14 22:33:14 xenial-qemu-tester iscsid: Connection8:0 to [target: iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873, portal: 10.0.5.11,3260] through [iface: default] is operational now

stack@xenial-devstack-master-master-20160914-092014:~$ nova volume-detach 6e1017a7-6dea-418f-ad9b-879da085bd13 d1d68e04-a217-44ea-bb74-65e0de73e5f8

stack@xenial-devstack-master-master-20160914-092014:~$ sudo iscsiadm -m session
tcp: [5] 10.0.1.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
tcp: [6] 10.0.5.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
tcp: [7] 10.0.1.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
tcp: [8] 10.0.5.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)

stack@xenial-devstack-master-master-20160914-092014:~$ sudo tail -f /var/log/syslog
Sep 14 22:48:10 xenial-qemu-tester kernel: [23257.736455] connection6:0: detected conn error (1020)
Sep 14 22:48:13 xenial-qemu-tester kernel: [23260.742036] connection5:0: detected conn error (1020)
Sep 14 22:48:13 xenial-qemu-tester kernel: [23260.742066] connection7:0: detected conn error (1020)
Sep 14 22:48:13 xenial-qemu-tester kernel: [23260.742139] connection8:0: detected conn error (1020)
Sep 14 22:48:13 xenial-qemu-tester kernel: [23260.742156] connection6:0: detected conn error (1020)
Sep 14 22:48:16 xenial-qemu-tester kernel: [23263.747638] connection5:0: detected conn error (1020)
Sep 14 22:48:16 xenial-qemu-tester kernel: [23263.747666] connection7:0: detected conn error (1020)
Sep 14 22:48:16 xenial-qemu-tester kernel: [23263.747710] connection8:0: detected conn error (1020)
Sep 14 22:48:16 xenial-qemu-tester kernel: [23263.747737] connection6:0: detected conn error (1020)
Sep 14 22:48:16 xenial-qemu-tester iscsid: message repeated 67 times: [ conn 0 login rejected: initiator failed authorization with target]
Sep 14 22:48:19 xenial-qemu-tester kernel: [23266.753999] connection6:0: detected conn error (1020)
Sep 14 22:48:19 xenial-qemu-tester kernel: [23266.754019] connection8:0: detected conn error (1020)
Sep 14 22:48:19 xenial-qemu-tester kernel: [23266.754105] connection5:0: detected conn error (1020)
Sep 14 22:48:19 xenial-qemu-tester kernel: [23266.754146] connection7:0: detected conn error (1020)

Reviewed: https://review.openstack.org/374421
Committed: https://git.openstack.org/cgit/openstack/os-brick/commit/?id=e591bc78cc01c1171060dc15399a46ff800b49c3
Submitter: Jenkins
Branch: master

commit e591bc78cc01c1171060dc15399a46ff800b49c3
Author: Patrick East <email address hidden>
Date: Wed Sep 21 15:05:48 2016 -0700

    Stop calling multipath -r when attaching/detaching iSCSI volumes

    Looking into this more there isn't any documented reason why we do this,
    and on Ubuntu 16.04 there are issues with timing and devices/symlinks
    getting messed up when we do the reload of device maps. We shouldn't
    need to be forcing multipathd to do this, it loads devices on its own.

    We'll leave in the one in 'wait_for_rw(..)' for now because there is
    some evidence that you may need to call it to update the rw state of
    the multipath devices, see:
    https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise
    _Linux/6/html/Storage_Administration_Guide/ch37s04s02.html

    Change-Id: Iec58284abdc9bcbf99df5d07289bb9d60a3554d7
    Closes-Bug: #1623700
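
The shape of the change can be sketched with an injected command runner (a simplified illustration, not the real os-brick connector; the class and method names here are invented): the leading 'multipath -r' reload is simply no longer issued on the disconnect path.

```python
class DetachFlowSketch:
    """Illustrative detach flow after the fix (hypothetical names)."""

    def __init__(self, run_cmd):
        # run_cmd stands in for a rootwrap-style command executor.
        self._run = run_cmd

    def disconnect_volume(self, device):
        # Before the fix the flow began with a forced reload:
        #   self._run(['multipath', '-r'])
        # The patch drops it; multipathd reloads maps on its own.
        self._run(['multipath', '-ll', device])  # resolve the map
        self._run(['multipath', '-l', device])   # inspect paths
```

The design point is simply that multipathd already tracks topology changes, so an explicit reload buys nothing and opens the delete/re-create window described above.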

Changed in os-brick:
status: New → Fix Released

This issue was fixed in the openstack/os-brick 1.8.0 release.

Hua Zhang (zhhuabj) wrote :
summary: - multipath iscsi does not logout of sessions on xenial
+ [SRU] multipath iscsi does not logout of sessions on xenial
description: updated
tags: added: sts-sru
Hua Zhang (zhhuabj) on 2017-01-25
Changed in python-os-brick (Ubuntu):
status: New → Invalid
no longer affects: python-os-brick (Ubuntu)

Hi, while testing this fix we found that in certain scenarios, if we don't call _rescan_multipath when attaching/detaching iSCSI volumes, the iSCSI sessions are not logged out. Our current workaround is to do a double _rescan_multipath in those places.

Tested with Netapp e-series 5524 / xenial /mitaka

Perhaps the rescans are there for a reason...
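
The double-rescan workaround mentioned above can be sketched as follows (rescan_multipath_twice is a name invented for this illustration; the real workaround lives inside os-brick's connector):

```python
def rescan_multipath_twice(run_cmd):
    """Issue 'multipath -r' twice, so the second reload runs after
    the device node removed by the first one has been re-created
    (the workaround described in the comment above)."""
    for _ in range(2):
        run_cmd(['multipath', '-r'])
```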

WITHOUT RESCAN:

Jan 25 09:24:40 calling os-brick to detach iSCSI Volume
Jan 25 09:24:40 Lock "connect_volume" acquired by "os_brick.initiator.connector.disconnect_volume" :: waited 0.000s
Jan 25 09:24:40 multipath -ll /dev/sdr
Jan 25 09:24:40 multipath -ll /dev/sdr" returned: 0 in 0.218s
Jan 25 09:24:40 multipath ['-ll', u'/dev/sdr']: stdout=360080e5000297ea40000050658885f45 dm-6 NETAPP,INF-01-00#012size=10G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw#012|-+- policy='
service-time 0' prio=14 status=active#012| |- 11:0:0:135 sdr 65:16 active ready running#012| |- 3:0:0:135 sds 65:32 active ready running#012| |- 8:0:0:135 sdq 65:0 active ready running#012| `- 9:0:0:135 sdt 65:48 active ready runni
ng#012`-+- policy='service-time 0' prio=9 status=enabled#012 |- 10:0:0:135 sdo 8:224 active ready running#012 |- 10:0:0:162 sdi 8:128 active faulty running#012 |- 11:0:0:162 sdj 8:144 active faulty running#012 |- 12:0:0:135 sdn 8:208
active ready running#012 |- 12:0:0:162 sdk 8:160 active faulty running#012 |- 13:0:0:135 sdp 8:240 active ready running#012 |- 13:0:0:162 sdl 8:176 active faulty running#012 |- 3:0:0:162 sdb 8:16 active faulty running#012 |- 4:0:0
:135 sdm 8:192 active ready running#012 |- 4:0:0:162 sdc 8:32 active faulty running#012 |- 8:0:0:162 sdg 8:96 active faulty running#012 `- 9:0:0:162 sdh 8:112 active faulty running#012 stderr=
Jan 25 09:24:40 remove multipath device /dev/sdr
Jan 25 09:24:40 multipath -l /dev/sdr
Jan 25 09:24:40 multipath -l /dev/sdr" returned: 0 in 0.176s
Jan 25 09:24:40 Couldn't find multipath device /dev/mapper/360080e5000297ea40000050658885f45
Jan 25 09:24:40 Disconnect multipath device /dev/mapper/360080e5000297ea40000050658885f45
Jan 25 09:24:40 multipath -ll
Jan 25 09:24:41 multipath -ll" returned: 0 in 0.248s
Jan 25 09:24:41 multipath ['-ll']: stdout=3624a9370dbcfee6048974fe300011180 dm-1 ##,###012size=1.0G features='0' hwhandler='0' wp=rw#012360080e5000297ea40000050658885f45 dm-6 NETAPP,INF-01-00#012size=10G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw#012|-+- policy='service-time 0' prio=14 status=active#012| |- 11:0:0:135 sdr 65:16 active ready running#012| |- 3:0:0:135 sds 65:32 active ready running#012| |- 8:0:0:135 sdq 65:0 active ready running#012| `- 9:0:0:135 sdt 65:48 active ready running#012`-+- policy='service-time 0' prio=9 status=enabled#012 |- 10:0:0:135 sdo 8:224 active ready running#012 |- 10:0:0:162 sdi 8:128 active faulty running#012 |- 11:0:0:162 sdj 8:144 active faulty running#012 |- 12:0:0:135 sdn 8:208 active ready running#012 |- 12:0:0:162 sdk 8:160 active faulty running#012 |- 13:0:0:135 sdp 8:240 active ready running#012 |- 13:0:0:162 sdl 8:176 active faulty running#012 |- 3:0:0...

Hua Zhang (zhhuabj) on 2017-01-26
Changed in python-os-brick (Ubuntu):
status: New → Invalid
Changed in python-os-brick (Ubuntu Xenial):
assignee: nobody → Hua Zhang (zhhuabj)
Changed in python-os-brick (Ubuntu):
status: Invalid → Fix Released
Changed in python-os-brick (Ubuntu Xenial):
status: New → In Progress
Hua Zhang (zhhuabj) wrote :

Can anyone help sponsor this SRU? Thanks a lot.

Changed in python-os-brick (Ubuntu):
importance: Undecided → Medium
Changed in python-os-brick (Ubuntu Xenial):
importance: Undecided → Medium
tags: added: sts sts-sponsor ubuntu-sponsors
Hua Zhang (zhhuabj) wrote :
Corey Bryant (corey.bryant) wrote :

Hua, Can you update the regression potential for the Ubuntu SRU details please? 'None' isn't really a good answer, especially with Gustavo's comment above. Can we check if any other patches have landed upstream wrt his comment?

Changed in cloud-archive:
status: New → Fix Released
importance: Undecided → Medium
Corey Bryant (corey.bryant) wrote :

This is going to need to land in yakkety and newton before it gets backported to xenial. I've initiated the upstream backport to newton:
https://review.openstack.org/#/c/447018/

Corey Bryant (corey.bryant) wrote :

Can't seem to target this to yakkety atm, launchpad is erroring for me.

Louis Bouchard (louis) on 2017-03-17
Changed in python-os-brick (Ubuntu Yakkety):
importance: Undecided → Medium
status: New → Triaged
Hua Zhang (zhhuabj) wrote :

@Gustavo,

I can't reproduce your problem; this patch works well for me. I did two experiments.

One experiment was WITHOUT the multipath-tools patch [1] + WITHOUT this os-brick patch; the test result is at link [2], where we can see that the multipath device can be deleted by 'multipath -r'.

Mar 22 10:33:52 juju-zhhuabj-machine-9 nova-compute[22305]: 2017-03-22 10:33:52.233 22305 WARNING os_brick.initiator.linuxscsi [req-0536068d-110c-43a4-82e4-941cdb715042 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] Couldn't find multipath device /dev/mapper/360000000000000000e00000000010001

Another experiment was WITHOUT the multipath-tools patch [2] + WITH this os-brick patch; the test result is at link [3], where we can see that the multipath device is not deleted, because this os-brick patch removed 'multipath -r'.

Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.520 17329 DEBUG oslo_concurrency.lockutils [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] Lock "connect_volume" acquired by "os_brick.initiator.connector.disconnect_volume" :: waited 0.001s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.523 17329 DEBUG oslo_concurrency.processutils [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf multipath -ll /dev/sda execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.705 17329 DEBUG oslo_concurrency.processutils [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf multipath -ll /dev/sda" returned: 0 in 0.183s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.707 17329 DEBUG os_brick.initiator.connector [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] multipath ['-ll', u'/dev/sda']: stdout=360000000000000000e00000000010001 dm-0 IET,VIRTUAL-DISK
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: size=1.0G features='0' hwhandler='0' wp=rw
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: |-+- policy='round-robin 0' prio=1 status=active
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: | `- 4:0:0:1 sda 8:0 active ready running
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: `-+- policy='round-robin 0' prio=1 status=enabled
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: `- 5:0:0:1 sdb 8:16 active ready running
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: stderr= _run_multipath /usr/lib/python2.7/dist-packages/os_brick/initiator/connector.py:1286
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.709 17329 DEBUG os_brick.i...

Read more...

Hua Zhang (zhhuabj) wrote :

@Gustavo,

Liang's comment on patch set 3 [1] explains your problem; he said:

The dev can disappear momentarily right after 'multipath -r "dev"'. So it doesn't happen for every single path. If it did, it would cause a lot more issues. The multipath dev removal path reloads the dev near the beginning of the operation (_rescan_multipath). Thus the "stat" here can fail if it is executed before the dev node being re-created.

1, 'multipath -r' in _rescan_multipath() [10] can make the multipath dev disappear momentarily due to the bug [9].

2, _get_multipath_device_name() [2] uses the 'multipath -ll' command to find the multipath device name [3], so we saw:

Jan 25 09:24:40 Lock "connect_volume" acquired by "os_brick.initiator.connector.disconnect_volume" :: waited 0.000s
...
Jan 25 09:24:40 multipath ['-ll', u'/dev/sdr']: stdout=360080e5000297ea40000050658885f45 dm-6 NETAPP,INF-01-00#012

3, _linuxscsi.remove_multipath_device() [4] is then invoked, so we saw:

Jan 25 09:24:40 remove multipath device /dev/sdr

4, then find_multipath_device() is invoked [5], which in turn invokes 'multipath -l' [6]

5, the "stat" right after 'multipath -r' [7] can fail if it is executed before the dev node is re-created, so we saw:

Jan 25 09:24:40 Couldn't find multipath device /dev/mapper/360080e5000297ea40000050658885f45

So the fix [1] was trying to address this problem, but it was later abandoned because we already have the fix [8]; that's also why I am trying to backport it.
FYI, the root cause of your problem is a bug in multipath-tools [9]; you can also fix the problem by upgrading multipath-tools.

[1] https://review.openstack.org/#/c/366065
[2] https://github.com/openstack/os-brick/blob/stable/mitaka/os_brick/initiator/connector.py#L925
[3] https://github.com/openstack/os-brick/blob/stable/mitaka/os_brick/initiator/connector.py#L1200
[4] https://github.com/openstack/os-brick/blob/stable/mitaka/os_brick/initiator/connector.py#L935
[5] https://github.com/openstack/os-brick/blob/stable/mitaka/os_brick/initiator/linuxscsi.py#L124
[6] https://github.com/openstack/os-brick/blob/stable/mitaka/os_brick/initiator/linuxscsi.py#L263
[7] https://github.com/openstack/os-brick/blob/stable/mitaka/os_brick/initiator/linuxscsi.py#L288
[8] https://review.openstack.org/#/c/374421/
[9] https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1621340
[10] https://github.com/openstack/os-brick/blob/stable/mitaka/os_brick/initiator/connector.py#L918

Hua Zhang (zhhuabj) wrote :
description: updated

Hi Joshua, I'll check this and give you feedback. Thanks!


tags: added: sts-sru-needed
removed: sts-sru ubuntu-sponsors

Hi Joshua, I've done some tests and it's OK!
Thank you

Michael Terry (mterry) wrote :

Thanks zhhuabj and gustavo! I've uploaded the patches to yakkety and xenial.

tags: removed: sts-sponsor

Hello Patrick, or anyone else affected,

Accepted python-os-brick into xenial-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/python-os-brick/1.2.0-2ubuntu0.3 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in python-os-brick (Ubuntu Xenial):
status: In Progress → Fix Committed
tags: added: verification-needed
Changed in python-os-brick (Ubuntu Yakkety):
status: Triaged → Fix Committed
Chris J Arges (arges) wrote :

Hello Patrick, or anyone else affected,

Accepted python-os-brick into yakkety-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/python-os-brick/1.6.1-0ubuntu1.2 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Hua Zhang (zhhuabj) wrote :

Verified successfully on both xenial-proposed and yakkety-proposed. http://paste.ubuntu.com/24365563/

tags: added: verification-xenial-done verification-yakkety-done
removed: verification-needed
Hua Zhang (zhhuabj) wrote :

It is worth mentioning that our customer has also given me feedback that the package in xenial-proposed solved their problem. Thanks.

James Page (james-page) wrote :

Hello Patrick, or anyone else affected,

Accepted python-os-brick into mitaka-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:mitaka-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-mitaka-needed to verification-mitaka-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-mitaka-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-done
removed: verification-xenial-done verification-yakkety-done
tags: added: verification-mitaka-needed
James Page (james-page) wrote :

Hello Patrick, or anyone else affected,

Accepted python-os-brick into newton-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:newton-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-newton-needed to verification-newton-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-newton-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-newton-needed
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package python-os-brick - 1.2.0-2ubuntu0.3

---------------
python-os-brick (1.2.0-2ubuntu0.3) xenial; urgency=medium

  * d/p/Stop-calling-multipath-r-when-attaching-detaching-iS.patch:
    Backport fix for stopping calling 'multipath -r' (LP: #1623700)

 -- Zhang Hua <email address hidden> Fri, 17 Mar 2017 19:23:13 +0800

Changed in python-os-brick (Ubuntu Xenial):
status: Fix Committed → Fix Released

The verification of the Stable Release Update for python-os-brick has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Launchpad Janitor (janitor) wrote :

This bug was fixed in the package python-os-brick - 1.6.1-0ubuntu1.2

---------------
python-os-brick (1.6.1-0ubuntu1.2) yakkety; urgency=medium

  * d/p/stop-calling-multipath-r-when-attaching-detaching-iS.patch
   Backport fix for stopping calling 'multipath -r' (LP: #1623700)

 -- Hua Zhang <email address hidden> Wed, 22 Mar 2017 18:04:15 +0800

Changed in python-os-brick (Ubuntu Yakkety):
status: Fix Committed → Fix Released
Hua Zhang (zhhuabj) wrote :

I've verified that the new packages (1.6.1-0ubuntu1.2~cloud0 and 1.2.0-2ubuntu0.3~cloud0) don't break anything and do fix the problem; see https://pastebin.canonical.com/186800/ and https://pastebin.canonical.com/186797/

tags: added: verification-mitaka-done verification-newton-done
removed: verification-mitaka-needed verification-newton-needed