Ubuntu 18.04 NVMe disks are not displayed on lsblk and /dev/disk/by-path (NVMe / Bolt ) (systemd?)

Bug #1758205 reported by bugproxy on 2018-03-22
This bug affects 1 person
Affects Status Importance Assigned to Milestone
The Ubuntu-power-systems project
Medium
Canonical Kernel Team
linux (Ubuntu)
Medium
Joseph Salisbury
Bionic
Medium
Joseph Salisbury

Bug Description

---Problem Description---

Bolt based NVMe disks (namespaces) are not displayed on lsblk and /dev/disk/by-path
# nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 1 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n10 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 10 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n11 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 11 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n12 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 12 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n13 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 13 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n14 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 14 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n15 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 15 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n16 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 16 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n17 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 17 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n18 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 18 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n19 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 19 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n2 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 2 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n20 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 20 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n21 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 21 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n22 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 22 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n23 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 23 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n24 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 24 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n25 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 25 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n26 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 26 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n27 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 27 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n28 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 28 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n29 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 29 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n3 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 3 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n30 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 30 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n31 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 31 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n32 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 32 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n4 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 4 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n5 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 5 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n6 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 6 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n7 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 7 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n8 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 8 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n9 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 9 28.67 GB / 28.67 GB 4 KiB + 0 B MN12MN12

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 931.5G 0 disk
├─sda1 8:1 1 7M 0 part
└─sda2 8:2 1 931.5G 0 part /
sdb 8:16 1 931.5G 0 disk
├─sdb1 8:17 1 7M 0 part
├─sdb2 8:18 1 9.3G 0 part
├─sdb3 8:19 1 856.8G 0 part
├─sdb4 8:20 1 1K 0 part
├─sdb5 8:21 1 1G 0 part
└─sdb6 8:22 1 64.4G 0 part
  ├─rhelaa_ltciofvtr--spoon400-swap 253:0 0 4G 0 lvm
  ├─rhelaa_ltciofvtr--spoon400-home 253:1 0 19.8G 0 lvm
  └─rhelaa_ltciofvtr--spoon400-root 253:2 0 40.6G 0 lvm

---uname output---
# uname -a Linux ltciofvtr-spoon4 4.15.0-10-generic #11-Ubuntu SMP Tue Feb 13 18:21:52 UTC 2018 ppc64le ppc64le ppc64le GNU/Linux

Machine Type = AC922

---Steps to Reproduce---
1> Install Ubuntu 18.04 on an AC922 system
2> Make sure a Bolt adapter is present in the system:
0003:01:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller 172Xa [144d:a822] (rev 01)

3> Create namespaces using the following script:
#!/bin/bash
# Recreate the maximum number of namespaces the controller supports.

device=/dev/nvme0
echo "$device"

nvme format "$device"

nvme set-feature "$device" -f 0x0b --value=0x0100

# Delete all existing namespaces (NSID 0xFFFFFFFF selects every namespace).
nvme delete-ns "$device" -n 0xFFFFFFFF
sleep 5
nvme list

nvme get-log "$device" -l 200 -i 4

# Maximum namespace count, taken from the controller's "nn" field.
max=$(nvme id-ctrl "$device" | grep '^nn' | awk '{print $NF}')

for i in $(seq 1 "$max")
do
    echo "$i"
    nvme create-ns "$device" --nsze=7000000 --ncap=7000000 --flbas=0 --dps=0
    nvme attach-ns "$device" --namespace-id="$i" --controllers="$(nvme list-ctrl "$device" | awk -F: '{print $2}')"
    sleep 2
    nvme get-log "$device" -l 200 -i 4
    sleep 2
done
nvme list

4> Run: nvme ns-rescan /dev/nvme0 ; lsblk

==

After discussion with the lsblk maintainer and several NVMe developers, the 'slaves' and 'holders' entries were removed for multipath NVMe.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/nvme/host/core.c?h=v4.16-rc6&id=8a30ecc6e0ecbb9ae95daf499b2680b885ed0349

Revert "nvme: create 'slaves' and 'holders' entries for hidden controllers"

I have verified the patch with the latest kernel. Please include the above patch in Ubuntu 18.04.

bugproxy (bugproxy) wrote : sosreport

Default Comment by Bridge

tags: added: architecture-ppc64le bugnameltc-164830 severity-medium targetmilestone-inin1804
Changed in ubuntu:
assignee: nobody → Ubuntu on IBM Power Systems Bug Triage (ubuntu-power-triage)
affects: ubuntu → linux (Ubuntu)
Changed in ubuntu-power-systems:
importance: Undecided → Medium
assignee: nobody → Canonical Kernel Team (canonical-kernel-team)
tags: added: triage-g
Changed in ubuntu-power-systems:
status: New → Triaged
Changed in linux (Ubuntu Bionic):
status: New → In Progress
importance: Undecided → Medium
assignee: Ubuntu on IBM Power Systems Bug Triage (ubuntu-power-triage) → Joseph Salisbury (jsalisbury)
Changed in ubuntu-power-systems:
status: Triaged → In Progress
Joseph Salisbury (jsalisbury) wrote :

I built a test kernel with commit 8a30ecc6e0ecbb9ae95daf499b2680b885ed0349. The test kernel can be downloaded from:
http://kernel.ubuntu.com/~jsalisbury/lp1758205

Can you test this kernel and see if it resolves this bug?

Note, to test this kernel, you need to install both the linux-image and linux-image-extra .deb packages.

Thanks in advance!

------- Comment From <email address hidden> 2018-04-10 11:41 EDT-------
Naveed, can you verify the kernel if you want the patch in Ubuntu 18.04?

bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2018-04-30 04:52 EDT-------
root@ltciofvtr-spoon4:~# nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 1 737.28 GB / 737.28 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n2 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 2 737.28 GB / 737.28 GB 4 KiB + 0 B MN12MN12
root@ltciofvtr-spoon4:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 931.5G 0 disk
├─sda1 8:1 1 7M 0 part
└─sda2 8:2 1 931.5G 0 part /
sdb 8:16 1 931.5G 0 disk
├─sdb1 8:17 1 7M 0 part
├─sdb2 8:18 1 9.3G 0 part
├─sdb3 8:19 1 856.8G 0 part
├─sdb4 8:20 1 1K 0 part
├─sdb5 8:21 1 1G 0 part
└─sdb6 8:22 1 64.4G 0 part
  ├─rhelaa_ltciofvtr--spoon400-swap 253:0 0 4G 0 lvm
  ├─rhelaa_ltciofvtr--spoon400-home 253:1 0 19.8G 0 lvm
  └─rhelaa_ltciofvtr--spoon400-root 253:2 0 40.6G 0 lvm
nvme0n1 259:1 0 686.7G 0 disk
nvme0n2 259:3 0 686.7G 0 disk
root@ltciofvtr-spoon4:~# uname -a
Linux ltciofvtr-spoon4 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:14:44 UTC 2018 ppc64le ppc64le ppc64le GNU/Linux

bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2018-05-01 11:08 EDT-------
The namespaces are still not visible under /dev/disk/by-path:

# ll /dev/disk/by-path/ | grep -i nvme
#
# nvme list | grep nvme | wc -l
32
#
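The mismatch between those two counts can be checked mechanically. A minimal sketch (not part of the original report; `count_nvme_nodes` is a hypothetical helper that just counts device nodes in captured text):

```shell
# Count NVMe device nodes in a captured listing (e.g. `nvme list` output).
count_nvme_nodes() { grep -c '^/dev/nvme'; }

# On a live system one would compare, for example:
#   nvme list | count_nvme_nodes              # namespaces nvme-cli reports
#   ls -l /dev/disk/by-path/ | grep -ci nvme  # symlinks udev created
# Demo on captured text (prints 2):
printf '/dev/nvme0n1 ...\n/dev/nvme0n2 ...\nsda ...\n' | count_nvme_nodes
```

If the first number is nonzero while the second is zero, the kernel sees the namespaces but udev is not creating by-path links, which is the situation reported here.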

Joseph Salisbury (jsalisbury) wrote :

Can you ensure you are booted into the test kernel posted in comment #2? It should have the following text in the uname -a output: "lp1758205"

bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2018-05-17 02:50 EDT-------
(In reply to comment #15)
> Still namespaces are not visible under /dev/disk/by-path
>
>
> # ll /dev/disk/by-path/ | grep -i nvme
> #
> # nvme list | grep nvme | wc -l
> 32
> #

(In reply to comment #16)
> Can you ensure you are booted into the test kernel posted in comment #2? It
> should have the following text in the uname -a output: "lp1758205"

We do not see the NVMe devices in /dev/disk/by-path even with the above test kernel and the -20 kernel.

===console logs===
# ll /dev/disk/by-path/ | grep -i nvme
# uname -a
Linux ltciofvtr-spoon4 4.15.0-12-generic #13~lp1758205 SMP Tue Mar 27 16:09:39 UTC 2018 ppc64le ppc64le ppc64le GNU/Linux
# nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 1 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n10 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 10 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n11 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 11 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n12 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 12 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n13 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 13 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n14 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 14 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n15 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 15 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n16 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 16 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n17 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 17 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n18 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 18 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n19 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 19 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n2 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 2 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n20 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 20 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n21 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 21 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n22 S3RVNA0J600206 PCIe3 1.6TB NVMe Flash Adapter II x8 22 24.58 GB / 24.58 GB 4 KiB + 0 B MN12MN12
/dev/nvme0n23 S3RVNA0J600206 PCIe3 1.6TB NV...


Joseph Salisbury (jsalisbury) wrote :

Actually commit 8a30ecc6e0ecbb9ae95daf499b2680b885ed0349 is in bionic as of 4.15.0-13:

1a3abe0db677 Revert "nvme: create 'slaves' and 'holders' entries for hidden controllers"

git describe --contains 1a3abe0db677
Ubuntu-4.15.0-13.14~323

Can you see if this bug still exists with the latest Bionic kernel, which should be newer than -13?

Manoj Iyer (manjo) on 2018-05-21
Changed in ubuntu-power-systems:
status: In Progress → Incomplete
Changed in linux (Ubuntu Bionic):
status: In Progress → Incomplete
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2018-05-22 17:45 EDT-------
I built the latest upstream kernel on my Briggs + Bolt system. It looks like the patch only fixed the lsblk issue. We can deliver the following patch through this bug, then open a new one for the /dev/disk/by-path issue.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/nvme/host/core.c?h=v4.16-rc6&id=8a30ecc6e0ecbb9ae95daf499b2680b885ed0349

If I disable NVME_MULTIPATH, I don't see the issue. Since NVME_MULTIPATH is a new feature, it probably still has some issues around it. I will work on the /dev/disk/by-path issue in the new bug.

Thanks,
Wendy

Changed in linux (Ubuntu):
status: In Progress → Fix Released
Changed in linux (Ubuntu Bionic):
status: Incomplete → Fix Released
Manoj Iyer (manjo) on 2018-05-24
Changed in ubuntu-power-systems:
status: Incomplete → Fix Released
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2018-06-08 11:10 EDT-------
linux-x6bg:/usr/src/packages/BUILD/systemd-v234+suse.160.gd5dfab21f # ./udevadm test-builtin path_id /sys/block/nvme1n1
calling: test-builtin
Load module index
Network interface NamePolicy= disabled by default.
Parsed configuration file /usr/lib/systemd/network/99-default.link
Created link configuration context.
Unload module index
Unloaded link configuration context.

It looks like udev couldn't compute a path_id for this Bolt device.

linux-x6bg:/usr/src/packages/BUILD/systemd-v234+suse.160.gd5dfab21f # ./udevadm test-builtin path_id /sys/block/nvme0n1
calling: test-builtin
Load module index
Network interface NamePolicy= disabled by default.
Parsed configuration file /usr/lib/systemd/network/99-default.link
Created link configuration context.
ID_PATH=pci-001e:90:00.0-nvme-1 -----------> This is the device without the issue.
ID_PATH_TAG=pci-001e_90_00_0-nvme-1
Unload module index
Unloaded link configuration context.
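For reference, the by-path symlink name is taken directly from ID_PATH, and ID_PATH_TAG is derived from it by mapping characters that are not tag-safe to '_'. A rough sketch of that mapping (an approximation of udev's behavior, shown on the value from the working device above):

```shell
# Approximate udev's ID_PATH -> ID_PATH_TAG derivation: ':' and '.' become '_'.
id_path='pci-001e:90:00.0-nvme-1'
id_path_tag=$(printf '%s' "$id_path" | tr ':.' '__')
echo "$id_path_tag"   # pci-001e_90_00_0-nvme-1
```

When the path_id builtin produces no ID_PATH at all, as in the failing run above, udev has nothing to name the /dev/disk/by-path symlink after, so none is created.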

Jun Kong (jknonub) wrote :

I have updated the kernel to 4.15.0-23-generic and still see this problem:

root@localhost:/sys/block# nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 CJH0020005EB HUSMR7616BDP301 1 1.92 TB / 1.92 TB 512 B + 8 B KNECD105

root@localhost:/sys/block# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 476M 0 part /boot
├─sda2 8:2 0 30.8G 0 part [SWAP]
└─sda3 8:3 0 434.6G 0 part /
sdb 8:16 0 931.5G 0 disk
sr0 11:0 1 1024M 0 rom
root@localhost:/sys/block# ls -l /sys/block/
total 0
lrwxrwxrwx 1 root root 0 Jun 25 15:05 loop0 -> ../devices/virtual/block/loop0
lrwxrwxrwx 1 root root 0 Jun 25 15:05 loop1 -> ../devices/virtual/block/loop1
lrwxrwxrwx 1 root root 0 Jun 25 15:05 loop2 -> ../devices/virtual/block/loop2
lrwxrwxrwx 1 root root 0 Jun 25 15:05 loop3 -> ../devices/virtual/block/loop3
lrwxrwxrwx 1 root root 0 Jun 25 15:05 loop4 -> ../devices/virtual/block/loop4
lrwxrwxrwx 1 root root 0 Jun 25 15:05 loop5 -> ../devices/virtual/block/loop5
lrwxrwxrwx 1 root root 0 Jun 25 15:05 loop6 -> ../devices/virtual/block/loop6
lrwxrwxrwx 1 root root 0 Jun 25 15:05 loop7 -> ../devices/virtual/block/loop7
lrwxrwxrwx 1 root root 0 Jun 25 15:05 nvme0n1 -> ../devices/pci0000:00/0000:00:01.1/0000:02:00.0/nvme/nvme0/nvme0n1
lrwxrwxrwx 1 root root 0 Jun 25 15:05 sda -> ../devices/pci0000:00/0000:00:1f.2/ata3/host2/target2:0:0/2:0:0:0/block/sda
lrwxrwxrwx 1 root root 0 Jun 25 15:05 sdb -> ../devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdb
lrwxrwxrwx 1 root root 0 Jun 25 15:05 sr0 -> ../devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/block/sr0
root@localhost:/sys/block# lspci -nn | grep -i Non
02:00.0 Non-Volatile memory controller [0108]: HGST, Inc. Ultrastar SN200 Series NVMe SSD [1c58:0023] (rev 02)
root@localhost:/sys/block# blockdev --getsize64 /dev/nvme0n1
0
root@localhost:/sys/block#
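The 0 returned by `blockdev --getsize64` matches what the kernel exposes in sysfs, where `/sys/block/<dev>/size` holds the device size in 512-byte sectors. A sketch that flags zero-sized devices, run against a mock sysfs tree so it works without the hardware (all names and sizes below are made up):

```shell
# Build a mock /sys/block tree with one healthy disk and one zero-sized one.
sysblock=$(mktemp -d)
mkdir -p "$sysblock/sda" "$sysblock/nvme0n1"
echo 976773168 > "$sysblock/sda/size"     # healthy disk
echo 0         > "$sysblock/nvme0n1/size" # the symptom seen in this report

# Collect the names of devices whose sector count is zero.
zero_sized=$(for d in "$sysblock"/*; do
    [ "$(cat "$d/size")" -eq 0 ] && basename "$d"
done)
echo "zero-sized: $zero_sized"   # zero-sized: nvme0n1
rm -rf "$sysblock"
```

On a real system the same loop over /sys/block would show whether the kernel itself reports the namespace as zero-sized, or whether only the userspace tools disagree.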

# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

# dmidecode -t 2
# dmidecode 3.1
Getting SMBIOS data from sysfs.
SMBIOS 2.6 present.

Handle 0x0002, DMI type 2, 15 bytes
Base Board Information
        Manufacturer: ASUSTeK Computer INC.
        Product Name: P8Z68-V PRO
        Version: Rev 1.xx
        Serial Number: 110104170000300
        Asset Tag: To be filled by O.E.M.
        Features:
                Board is a hosting board
                Board is replaceable
        Location In Chassis: To be filled by O.E.M.
        Chassis Handle: 0x0003
        Type: Motherboard
        Contained Object Han...


Andrew Cloke (andrew-cloke) wrote :

As this bug has been closed, could you raise a new bug (possibly referencing this one) for this issue? Thanks.
