multipath -ll doesn't discover all paths on Emulex hosts

Bug #1860587 reported by Jennifer Duong
Affects: multipath-tools (Ubuntu)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

root@ICTM1610S01H1:/opt/iop/usr/jduong# apt-cache show multipath-tools
Package: multipath-tools
Architecture: amd64
Version: 0.7.9-3ubuntu7
Priority: extra
Section: admin
Origin: Ubuntu
Maintainer: Ubuntu Developers <email address hidden>
Original-Maintainer: Debian DM Multipath Team <email address hidden>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 1141
Depends: libaio1 (>= 0.3.106-8), libc6 (>= 2.29), libdevmapper1.02.1 (>= 2:1.02.97), libjson-c4 (>= 0.13.1), libreadline8 (>= 6.0), libsystemd0, libudev1 (>= 183), liburcu6 (>= 0.11.1), udev (>> 136-1), kpartx (>= 0.7.9-3ubuntu7), lsb-base (>= 3.0-6), sg3-utils-udev
Suggests: multipath-tools-boot
Breaks: multipath-tools-boot (<= 0.4.8+git0.761c66f-2~), multipath-tools-initramfs (<= 1.0.1)
Filename: pool/main/m/multipath-tools/multipath-tools_0.7.9-3ubuntu7_amd64.deb
Size: 276004
MD5sum: ddf4c86498c054621e6aff07d9e71f84
SHA1: e6baf43104651d7346389e4edfa9363902a0ed62
SHA256: 858b9dd5c4597a20d9f44eb03f2b22a3835d14ced541bde59b74bbbcc568d7f9
Homepage: http://christophe.varoqui.free.fr/
Description-en: maintain multipath block device access
 These tools are in charge of maintaining the disk multipath device maps and
 react to path and map events.
 .
 If you install this package you may have to change the way you address block
 devices. See README.Debian for details.
Description-md5: d2b50f6d45021a3e6697180f992bb365
Task: server, cloud-image
Supported: 9m

root@ICTM1610S01H1:/opt/iop/usr/jduong# lsb_release -rd
Description: Ubuntu Focal Fossa (development branch)
Release: 20.04

root@ICTM1610S01H1:/opt/iop/usr/jduong# apt-cache policy multipath-tools
multipath-tools:
  Installed: 0.7.9-3ubuntu7
  Candidate: 0.7.9-3ubuntu7
  Version table:
 *** 0.7.9-3ubuntu7 500
        500 http://repomirror-ict.eng.netapp.com/ubuntu focal/main amd64 Packages
        100 /var/lib/dpkg/status

Both hosts have the following Emulex HBAs:

root@ICTM1610S01H1:/opt/iop/usr/jduong# cat /sys/class/fc_host/host*/symbolic_name
Emulex LPe16002B-M6 FV12.4.243.11 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
Emulex LPe16002B-M6 FV12.4.243.11 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
Emulex LPe32002-M2 FV12.4.243.17 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
Emulex LPe32002-M2 FV12.4.243.17 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
Emulex LPe35002-M2 FV12.4.243.23 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
Emulex LPe35002-M2 FV12.4.243.23 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
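
(For reference, per-host link state can be checked from the same sysfs tree; a sketch, assuming the standard fc_host attributes:)

# grep . /sys/class/fc_host/host*/port_state   # e.g. Online / Linkdown per FC host
# grep . /sys/class/fc_host/host*/port_name    # WWPN per FC host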

This is what I’m seeing when I run multipath -ll on my Emulex fabric-attached host:

root@ICTM1610S01H1:/opt/iop/usr/jduong# multipath -ll
3600a098000a0a2bc000075685ddf9fc5 dm-13 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:141 sdm 8:192 active ready running
3600a098000a0a2bc0000756f5ddf9ffb dm-24 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:151 sdw 65:96 active ready running
3600a098000a0a2bc0000756a5ddf9fd0 dm-11 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:143 sdo 8:224 active ready running
3600a098000a0a28a00009d785ddf9f0f dm-20 NETAPP,INF-01-00
size=18G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:152 sdx 65:112 active ready running
3600a098000a0a28a00009d675ddf9eac dm-28 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:134 sdf 8:80 active ready running
3600a098000a0a28a00009d655ddf9ea1 dm-7 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:132 sdd 8:48 active ready running
3600a098000a0a2bc0000756c5ddf9fd9 dm-12 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:145 sdq 65:0 active ready running
3600a098000a0a2bc000075665ddf9fbb dm-1 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:139 sdk 8:160 active ready running
3600a098000a0a2bc000075645ddf9fb0 dm-3 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:137 sdi 8:128 active ready running

If I grep the LUN with lsscsi, it shows all 12 paths:

root@ICTM1610S01H1:/opt/iop/usr/jduong# lsscsi | grep 141
[11:0:0:141] disk NETAPP INF-01-00 8862 /dev/sdbs
[11:0:1:141] disk NETAPP INF-01-00 8862 /dev/sddy
[12:0:0:141] disk NETAPP INF-01-00 8862 /dev/sdm
[12:0:1:141] disk NETAPP INF-01-00 8862 /dev/sdap
[13:0:0:141] disk NETAPP INF-01-00 8862 /dev/sdcy
[13:0:1:141] disk NETAPP INF-01-00 8862 /dev/sdet
[14:0:0:141] disk NETAPP INF-01-00 8862 /dev/sdge
[14:0:1:141] disk NETAPP INF-01-00 8862 /dev/sdhh
[15:0:0:141] disk NETAPP INF-01-00 8862 /dev/sdik
[15:0:1:141] disk NETAPP INF-01-00 8862 /dev/sdjn
[16:0:0:141] disk NETAPP INF-01-00 8862 /dev/sdkq
[16:0:1:141] disk NETAPP INF-01-00 8862 /dev/sdlt
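
(One way to cross-check the kernel's SCSI view against what the multipath daemon itself sees is to query multipathd directly; a sketch, grepping for the same LUN:)

# multipathd show paths | grep ':141 '   # paths multipathd knows for LUN 141
# multipathd show maps                   # devmap summary per WWID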

And this is what I’m seeing on my Emulex direct-connect host:

root@ICTM1610S01H3:~# multipath -ll
3600a098000a0a28a00009dbc5ddfa08b dm-26 NETAPP,INF-01-00
size=18G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:0:156 sdab 65:176 active ready running
| `- 13:0:0:156 sdbe 67:128 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 14:0:0:156 sdch 69:80 active ready running
  `- 15:0:0:156 sddk 71:32 active ready running
3600a098000a0a2bc000075b75ddfa185 dm-27 NETAPP,INF-01-00
size=18G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 15:0:0:157 sddl 71:48 active ready running
| `- 14:0:0:157 sdci 69:96 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:0:157 sdac 65:192 active ready running
  `- 13:0:0:157 sdbf 67:144 active ready running
3600a098000a0a2bc000075a65ddfa129 dm-14 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 15:0:0:143 sdcx 70:80 active ready running
| `- 14:0:0:143 sdbu 68:128 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:0:143 sdo 8:224 active ready running
  `- 13:0:0:143 sdar 66:176 active ready running
3600a098000a0a2bc000075a25ddfa10f dm-3 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:139 sdk 8:160 active ready running
3600a098000a0a28a00009db45ddfa06f dm-22 NETAPP,INF-01-00
size=18G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:0:152 sdx 65:112 active ready running
| `- 13:0:0:152 sdba 67:64 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 14:0:0:152 sdcd 69:16 active ready running
  `- 15:0:0:152 sddg 70:224 active ready running
3600a098000a0a28a00009dab5ddfa02e dm-13 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:0:142 sdn 8:208 active ready running
| `- 13:0:0:142 sdaq 66:160 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 14:0:0:142 sdbt 68:112 active ready running
  `- 15:0:0:142 sdcw 70:64 active ready running
3600a098000a0a2bc000075a05ddfa103 dm-4 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:137 sdi 8:128 active ready running
3600a098000a0a28a00009da75ddfa016 dm-11 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:0:138 sdj 8:144 active ready running
| `- 13:0:0:138 sdam 66:96 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 14:0:0:138 sdbp 68:48 active ready running
  `- 15:0:0:138 sdcs 70:0 active ready running
3600a098000a0a2bc0000759c5ddfa0e9 dm-8 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:133 sde 8:64 active ready running

If I grep with lsscsi for a LUN that multipath -ll shows with only one path, all 4 paths show up:

root@ICTM1610S01H3:~# lsscsi | grep 139
[11:0:0:139] disk NETAPP INF-01-00 8862 /dev/sdk
[13:0:0:139] disk NETAPP INF-01-00 8862 /dev/sdan
[14:0:0:139] disk NETAPP INF-01-00 8862 /dev/sdbq
[15:0:0:139] disk NETAPP INF-01-00 8862 /dev/sdct
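
(A quick way to see the per-LUN path count the kernel reports across all LUNs; a sketch built on the same lsscsi output, where the awk field indexing assumes the [H:C:T:L] prefix format shown above:)

# lsscsi | awk -F'[][:]' '{print $5}' | sort -n | uniq -c   # count of SCSI devices (paths) per LUN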

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: multipath-tools 0.7.9-3ubuntu7
ProcVersionSignature: Ubuntu 5.4.0-9.12-generic 5.4.3
Uname: Linux 5.4.0-9-generic x86_64
ApportVersion: 2.20.11-0ubuntu15
Architecture: amd64
Date: Wed Jan 22 11:11:23 2020
InstallationDate: Installed on 2020-01-15 (6 days ago)
InstallationMedia: Ubuntu-Server 20.04 LTS "Focal Fossa" - Alpha amd64 (20200107)
ProcEnviron:
 TERM=xterm
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=<set>
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: multipath-tools
UpgradeStatus: No upgrade log present (probably fresh install)
modified.conffile..etc.multipath.conf:

mtime.conffile..etc.multipath.conf: 2020-01-15T20:56:03.616143

Revision history for this message
Jennifer Duong (jduong) wrote :
Revision history for this message
Jennifer Duong (jduong) wrote :

apport.multipath-tools.z4yugdhx.apport

Paride Legovini (paride)
tags: added: server-triage-discuss
Revision history for this message
Paride Legovini (paride) wrote :

Hello Jennifer, thanks for filing this bug report. One question: did you happen to test any previous release of Ubuntu on the same hardware? If you did, how did previous versions behave? This is important for us to understand if a regression was introduced in any of the components involved with multipath support, or if this was how Ubuntu behaved already.

While waiting for more information I'm setting the status of this report to Incomplete. Please change it back to New after replying, and we'll look at it again. Thanks!

tags: removed: server-triage-discuss
Revision history for this message
Jennifer Duong (jduong) wrote :

Paride, we have support for Ubuntu 18.04, but that was running with older QLogic and Emulex HBAs. Since we support those solutions, I would assume multipath did not behave then the way it does now.

Revision history for this message
Mauricio Faria de Oliveira (mfo) wrote :

Hi Jennifer,

Could you please run this command, and upload its output file (multipath-d-v3.log) to this LP bug?
It's running in dry-run mode (-d), so it should not change/affect the current multipath status.
This should provide more detail on what multipath is (not) doing during path discovery.

# multipath -d -v3 > multipath-d-v3.log 2>&1

Thanks!

Revision history for this message
Jennifer Duong (jduong) wrote :
Revision history for this message
Mauricio Faria de Oliveira (mfo) wrote :

Jennifer,

Thanks for uploading the file.

That's a cool, relatively large topology: if I got it right, you've got 3x 2-port FC adapters (thus 6 FC SCSI hosts) connected via switch/fabric to a 2-port storage array (thus 12 paths per LUN).

The good news is multipath has visibility of all 12 paths per LUN, so this is not a path detection/discovery issue.

Interestingly, it also displays the current multipath devmaps table and status (from device mapper), and that *does* show 12 paths, in Active status, in 2 priority groups as expected (ALUA storage).

Therefore, as the multipath devmaps appear to be configured correctly, the issue seems to be in path listing (multipath -ll).
(The fact that missing paths also occur on the direct-attached system, with 4 paths per LUN, possibly supports that, in the sense that the problem is not related to the attachment type, i.e., fabric vs. direct.)

Could you please attach the output of these commands?

# multipath -ll -v3 > multipath-ll-v3.log 2>&1

# dmsetup table > dmsetup-table.log 2>&1
# dmsetup status > dmsetup-status.log 2>&1
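
(As a rough cross-check, the number of path devices actually wired into a given devmap can be counted from its table line, since paths appear there as major:minor pairs; a sketch, using a WWID from the listings above:)

# dmsetup table 3600a098000a0a2bc000075685ddf9fc5 | grep -Eo '[0-9]+:[0-9]+' | wc -l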

Thanks,
Mauricio

Revision history for this message
Mauricio Faria de Oliveira (mfo) wrote :

Details mentioned in the previous comment:

$ grep 3600a098000a0a2bc000075685ddf9fc5 multipath-d-v3.log | grep -v udev
3600a098000a0a2bc000075685ddf9fc5 11:0:0:141 sdjl 8:496 10 undef undef NET
3600a098000a0a2bc000075685ddf9fc5 11:0:1:141 sdlt 68:432 50 undef undef NET
3600a098000a0a2bc000075685ddf9fc5 12:0:0:141 sdm 8:192 10 undef undef NET
3600a098000a0a2bc000075685ddf9fc5 12:0:1:141 sdap 66:144 50 undef undef NET
3600a098000a0a2bc000075685ddf9fc5 13:0:0:141 sdbs 68:96 10 undef undef NET
3600a098000a0a2bc000075685ddf9fc5 13:0:1:141 sdcv 70:48 50 undef undef NET
3600a098000a0a2bc000075685ddf9fc5 14:0:0:141 sdea 128:32 10 undef undef NET
3600a098000a0a2bc000075685ddf9fc5 14:0:1:141 sdgf 131:176 50 undef undef NET
3600a098000a0a2bc000075685ddf9fc5 15:0:0:141 sdev 129:112 50 undef undef NET
3600a098000a0a2bc000075685ddf9fc5 15:0:1:141 sdhb 133:16 10 undef undef NET
3600a098000a0a2bc000075685ddf9fc5 16:0:0:141 sdik 135:64 10 undef undef NET
Jan 27 09:27:51 | 3600a098000a0a2bc000075685ddf9fc5: disassemble map [3 queue_if_no_path pg_init_retries 50 1 alua 2 1 service-time 0 6 2 68:432 1 1 66:144 1 1 70:48 1 1 131:176 1 1 129:112 1 1 66:496 1 1 service-time 0 6 2 8:496 1 1 8:192 1 1 68:96 1 1 128:32 1 1 133:16 1 1 135:64 1 1 ]
Jan 27 09:27:51 | 3600a098000a0a2bc000075685ddf9fc5: disassemble status [2 0 1 0 2 1 A 0 6 2 68:432 A 0 0 1 66:144 A 0 0 1 70:48 A 0 0 1 131:176 A 0 0 1 129:112 A 0 0 1 66:496 A 0 0 1 E 0 6 2 8:496 A 0 0 1 8:192 A 0 0 1 68:96 A 0 0 1 128:32 A 0 0 1 133:16 A 0 0 1 135:64 A 0 0 1 ]
3600a098000a0a2bc000075685ddf9fc5 16:0:1:141 sdkr 66:496 50 undef undef NET
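
(For the record, the 12 discovered paths above can be counted directly from the log; this assumes, as shown, that the path-table lines begin with the WWID:)

$ grep -c '^3600a098000a0a2bc000075685ddf9fc5 ' multipath-d-v3.log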

Revision history for this message
Jennifer Duong (jduong) wrote :

Mauricio,

I should be seeing 12 paths for each LUN on my Emulex fabric-attached host and 4 paths on my Emulex direct-connect host. It looks like my QLogic fabric-attached host is now encountering this path listing issue too.

root@ICTM1610S01H2:~# multipath -ll
3600a098000a0a2bc000075865ddfa06a dm-8 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:141 sdm 8:192 active ready running
3600a098000a0a2bc0000758b5ddfa08e dm-5 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:147 sds 65:32 active ready running
3600a098000a0a28a00009d875ddf9f59 dm-14 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=10 status=active
  `- 11:0:0:136 sdh 8:112 active ready running
3600a098000a0a28a00009d8f5ddf9f88 dm-19 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=10 status=active
  `- 11:0:0:144 sdp 8:240 active ready running
3600a098000a0a2bc0000757c5ddfa02f dm-20 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:131 sdc 8:32 active ready running
3600a098000a0a2bc0000758c5ddfa09a dm-13 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:149 sdu 65:64 active ready running
3600a098000a0a28a00009d915ddf9fa1 dm-21 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=10 status=active
  `- 11:0:0:148 sdt 65:48 active ready running
3600a098000a0a2bc000075915ddfa0b6 dm-16 NETAPP,INF-01-00
size=18G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:153 sdy 65:128 active ready running
3600a098000a0a28a00009d855ddf9f4e dm-1 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=10 status=active
  `- 11:0:0:134 sdf 8:80 active ready running
3600a098000a0a2bc000075845ddfa05e dm-10 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:139 sdk 8:160 active ready running
3600a098000a0a28a00009d895ddf9f66 dm-23 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:1:138 sdam 66:96 active ready running
| |- 12:0:1:138 sdeb 128:48 active ready running
| |- 13:0:1:138 sdey 129:160 active ready running
| `- 14:0:0:138 sdgb 131:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:0:138 sdj 8:144 active ready running
  |- 12:0:0:138 sdcj ...

Revision history for this message
Jennifer Duong (jduong) wrote :
Revision history for this message
Mauricio Faria de Oliveira (mfo) wrote :

Jennifer,

My previous assumption was wrong: the actual/current devmaps are indeed missing paths.

So the devmaps included in the first command's path discovery probably referred to the devmaps intended to be created (which were complete), but for some reason some paths failed to make it into the devmaps.

Thus I'll ask you to collect an actual (non-dry-run) log this time, with extra libdevmapper verbosity.
This will attempt to update the devmaps, so please ensure nothing is using/relying on them (e.g., mounted filesystems, swap devices, other applications accessing them directly for storage.)
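
(A quick pre-check before running it could look like this; a sketch, using standard util-linux tools:)

# findmnt | grep mapper    # anything mounted from device-mapper devices?
# swapon --show            # any swap devices currently in use?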

# multipath -v8 > multipath-v8-update.log 2>&1

Also, if there's nothing using/relying on these multipath devices in your system, we should probably check the devmaps _creation_ too (not just the _updates_ above.)

So, if at all possible, after the above output has been collected already,
I'd like to ask you to flush them all (so the devmaps and /dev/mapper/ devices are removed), and recreate them from scratch, to check if there are behavior differences to note.

# multipath -v8 -F 2>&1 | tee multipath-v8-F.log # flush all devmaps
# multipath -v8 -l 2>&1 | tee multipath-v8-l.log # confirm there are no multipath devices listed.

# multipath -v8 > multipath-v8-create.log 2>&1
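
(To double-check the recreation, the multipath devmaps can also be listed straight from device-mapper, e.g.:)

# dmsetup ls --target multipath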

Thanks,
Mauricio

Revision history for this message
Jennifer Duong (jduong) wrote :

Mauricio, how should I go about this if all of my servers are SANboot?

Revision history for this message
Mauricio Faria de Oliveira (mfo) wrote :

Jennifer,

It's possible to script something that doesn't touch the multipath devices in use (e.g., root filesystem on FC/multipath.)
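
(For example, a rough sketch, untested; it assumes map names as listed by dmsetup and skips any map with a non-zero open count:)

for m in $(dmsetup ls --target multipath | awk '{print $1}'); do
    open=$(dmsetup info -c --noheadings -o open "$m" | tr -d ' ')   # open count = current users of the map
    [ "$open" = 0 ] && multipath -f "$m"                            # flush only maps nothing has open
done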

Please run this command and upload the generated sosreport tarball.
Then I can check which/how devices are being used and skip them.

$ sudo sosreport --case-id lp1860587 --batch --only-plugins block,boot,devicemapper,devices,filesys,grub2,kernel,lvm2,multipath,pci

Thanks,
Mauricio

Revision history for this message
Jennifer Duong (jduong) wrote :
Revision history for this message
Jennifer Duong (jduong) wrote :

Mauricio, this appears to be resolved after upgrading to the latest packages in -proposed. Will these changes be included in whichever ISO goes GA?

root@ICTM1610S01H1:~# multipath -ll
3600a098000a0a2bc000075685ddf9fc5 dm-8 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:1:141 sdap 66:144 active ready running
| |- 12:0:0:141 sdbs 68:96 active ready running
| |- 13:0:1:141 sdfb 129:208 active ready running
| |- 14:0:0:141 sdge 131:160 active ready running
| |- 15:0:1:141 sdkn 66:432 active ready running
| `- 16:0:0:141 sdjj 8:464 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:0:141 sdm 8:192 active ready running
  |- 12:0:1:141 sdcv 70:48 active ready running
  |- 13:0:0:141 sddy 128:0 active ready running
  |- 14:0:1:141 sdhh 133:112 active ready running
  |- 15:0:0:141 sdin 135:112 active ready running
  `- 16:0:1:141 sdlh 67:496 active ready running
3600a098000a0a2bc0000756f5ddf9ffb dm-21 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:1:151 sdaz 67:48 active ready running
| |- 12:0:0:151 sdcc 69:0 active ready running
| |- 13:0:1:151 sdfl 130:112 active ready running
| |- 14:0:0:151 sdgo 132:64 active ready running
| |- 15:0:1:151 sdma 69:288 active ready running
| `- 16:0:0:151 sdjx 65:432 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:0:151 sdw 65:96 active ready running
  |- 12:0:1:151 sddf 70:208 active ready running
  |- 13:0:0:151 sdei 128:160 active ready running
  |- 14:0:1:151 sdhr 134:16 active ready running
  |- 15:0:0:151 sdjd 8:368 active ready running
  `- 16:0:1:151 sdls 68:416 active ready running
3600a098000a0a2bc0000756a5ddf9fd0 dm-11 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:1:143 sdar 66:176 active ready running
| |- 12:0:0:143 sdbu 68:128 active ready running
| |- 13:0:1:143 sdfd 129:240 active ready running
| |- 14:0:0:143 sdgg 131:192 active ready running
| |- 15:0:1:143 sdkq 66:480 active ready running
| `- 16:0:0:143 sdjn 65:272 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:0:143 sdo 8:224 active ready running
  |- 12:0:1:143 sdcx 70:80 active ready running
  |- 13:0:0:143 sdea 128:32 active ready running
  |- 14:0:1:143 sdhj 133:144 active ready running
  |- 15:0:0:143 sdir 135:176 active ready running
  `- 16:0:1:143 sdlj 68:272 active ready running
3600a098000a0a28a00009d785ddf9f0f dm-12 NETAPP,INF-01-00
size=18G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:0:152 sdx 65:112 active ready running
| |- 12:0:1:152 sddg 70:224 active ready running
| |- 1...

Revision history for this message
Mauricio Faria de Oliveira (mfo) wrote :

Jennifer, that's good news, thanks for sharing.

Yes, it's reasonable to assume that changes in -proposed at this time will make it to the 20.04 ISO.

If you see any regressions with this again, please feel free to report back on this bug.

cheers,
Mauricio

Revision history for this message
Mauricio Faria de Oliveira (mfo) wrote :

For the purpose of status reporting on this LP bug:
since the issue was fixed by something in focal-proposed,
but not by the package this bug is reported against
(there is no multipath-tools nor lvm2/libdevmapper in -proposed),
I'll just mark the bug as Invalid, as the cause is unknown
but confirmed not to be in multipath-tools.
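
(For anyone double-checking the pocket contents: a per-pocket version listing can confirm which packages are in focal-proposed, e.g. with rmadison from devscripts; a sketch:)

$ rmadison -u ubuntu multipath-tools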

Changed in multipath-tools (Ubuntu):
status: New → Invalid