[20.04] Missing thin-provisioning-tools prevents VG with thin pool LV from being (de)activated, but not its creation

Bug #1657646 reported by bugproxy on 2017-01-19
This bug affects 2 people
Affects / Status / Importance / Assigned to:
  The Ubuntu-power-systems project: importance Medium, assigned to Canonical Server Team
  lvm2 (Debian): status New, importance Unknown
  lvm2 (Ubuntu): importance Medium, unassigned

Bug Description

Creating a thin pool LV is allowed even when thin-provisioning-tools is not installed, but deactivating or activating that VG then fails. Since deactivating the VG usually only happens at reboot, the user might not notice this serious problem until then.

I think the lvconvert tool, used to combine the two "thin LVs" into a thin pool LV, should refuse to run if thin-provisioning-tools, or the scripts it needs, is not installed.
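Such a guard could look like the following sketch. The require_tool helper and its install hint are illustrative, not part of lvm2; only the thin_check name comes from this bug's error output:

```shell
#!/bin/sh
# Sketch of the suggested guard: check for the userspace checker that LVM
# will later exec at (de)activation time (thin_check) before allowing the
# conversion. "require_tool" is a hypothetical helper, not an lvm2 feature.
require_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $1"
    else
        echo "missing: $1 (try: apt install thin-provisioning-tools)" >&2
        return 1
    fi
}

# A real wrapper would run the guard before lvconvert, e.g.:
#   require_tool thin_check && lvconvert --type thin-pool ...
# Demonstrate with a binary that is always present:
require_tool sh
```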

Steps to reproduce:
root@15-89:~# vgcreate vg /dev/vdb1
  Volume group "vg" successfully created

root@15-89:~# vgs
  VG #PV #LV #SN Attr VSize VFree
  vg 1 0 0 wz--n- 40.00g 40.00g

root@15-89:~# lvcreate -n pool0 -l 90%VG vg
  Logical volume "pool0" created.

root@15-89:~# lvcreate -n pool0meta -l 5%VG vg
  Logical volume "pool0meta" created.

root@15-89:~# lvs
  LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  pool0 vg -wi-a----- 36.00g
  pool0meta vg -wi-a----- 2.00g

root@15-89:~# ll /dev/mapper/
total 0
drwxr-xr-x 2 root root 100 Jun 21 14:15 ./
drwxr-xr-x 20 root root 3820 Jun 21 14:15 ../
crw------- 1 root root 10, 236 Jun 21 13:15 control
lrwxrwxrwx 1 root root 7 Jun 21 14:14 vg-pool0 -> ../dm-0
lrwxrwxrwx 1 root root 7 Jun 21 14:15 vg-pool0meta -> ../dm-1

root@15-89:~# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
  WARNING: Converting logical volume vg/pool0 and vg/pool0meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert vg/pool0 and vg/pool0meta? [y/n]: y
  Converted vg/pool0 to thin pool.

root@15-89:~# ll /dev/mapper/
total 0
drwxr-xr-x 2 root root 120 Jun 21 14:15 ./
drwxr-xr-x 20 root root 3840 Jun 21 14:15 ../
crw------- 1 root root 10, 236 Jun 21 13:15 control
lrwxrwxrwx 1 root root 7 Jun 21 14:15 vg-pool0 -> ../dm-2
lrwxrwxrwx 1 root root 7 Jun 21 14:15 vg-pool0_tdata -> ../dm-1
lrwxrwxrwx 1 root root 7 Jun 21 14:15 vg-pool0_tmeta -> ../dm-0
root@15-89:~# lvs -a
  LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  [lvol0_pmspare] vg ewi------- 2.00g
  pool0 vg twi-a-tz-- 36.00g 0.00 0.01
  [pool0_tdata] vg Twi-ao---- 36.00g
  [pool0_tmeta] vg ewi-ao---- 2.00g

If you now reboot the system, all that is gone:
root@15-89:~# ll /dev/mapper/
total 0
drwxr-xr-x 2 root root 60 Jun 21 14:28 ./
drwxr-xr-x 19 root root 3760 Jun 21 14:28 ../
crw------- 1 root root 10, 236 Jun 21 14:28 control

The same happens if you deactivate the VG (which the reboot undoubtedly triggers). It fails because of a missing /usr/sbin/thin_check, which is provided by the thin-provisioning-tools package:

root@15-89:~# vgchange -a n
  /usr/sbin/thin_check: execvp failed: No such file or directory
  WARNING: Integrity check of metadata for pool vg/pool0 failed.
  0 logical volume(s) in volume group "vg" now active

root@15-89:~# ll /dev/mapper/
total 0
drwxr-xr-x 2 root root 60 Jun 21 14:29 ./
drwxr-xr-x 19 root root 3760 Jun 21 14:29 ../
crw------- 1 root root 10, 236 Jun 21 14:28 control

bugproxy (bugproxy) wrote : dmesg
  • dmesg (23.8 KiB, application/octet-stream)

Default Comment by Bridge

tags: added: architecture-ppc64le bugnameltc-150003 severity-high targetmilestone-inin---


Changed in ubuntu:
assignee: nobody → Taco Screen team (taco-screen-team)
affects: ubuntu → docker (Ubuntu)
Manoj Iyer (manjo) on 2017-01-23
Changed in docker (Ubuntu):
assignee: Taco Screen team (taco-screen-team) → Jon Grimm (jgrimm)
importance: Undecided → High
bugproxy (bugproxy) on 2017-01-23
tags: added: targetmilestone-inin16042
removed: targetmilestone-inin---

------- Comment From <email address hidden> 2017-01-23 17:10 EDT-------
An sosreport would be useful for those looking into the problem. Additionally, setting the log level (the level= option in /etc/lvm/lvm.conf) to a high value so we can get some debug output would be great.

BTW, maybe I missed something but the first instruction for the recreation was:

1. vgcreate docker-storage <disk>

Did you do a pvcreate on <disk> beforehand? That adds some lvm2 metadata so the disk is picked up by a scan; the pvs command should list it if that was done.

Hi,
as usual bugproxy accumulated a lot of info to go through.
I'll try to summarize - please feel free to correct - "VG/LV are not available in /dev/mapper/ after reboot".
Would that be a proper (and simpler) definition of the issue?

summary: - ISST-LTE:pVM:bamlp4:Ubuntu16.04.02VM:Docker:Docker is not coming up
- after system reboot
+ VG/LV are not available in /dev/mapper/ after reboot

I tried to recreate.
Note: since it was reported on POWER + Xenial, this test was done on ppc64el Xenial as of today.

To do so with as much debugging as possible I created a normal Xenial KVM Guest via
$ uvt-kvm create --cpu 4 --password=ubuntu paelzer-testlvm-xenial release=xenial
Then I added a few more disks to be used as PVs
$ sudo qemu-img create -f qcow2 test-lvm-disk1.qcow2 8G
And added those to the Guest.

The guest then initially looks like:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 8G 0 disk
vdb 253:16 0 8G 0 disk
vdc 253:32 0 8G 0 disk
vdd 253:48 0 366K 0 disk
vde 253:64 0 8G 0 disk
|-vde1 253:65 0 8G 0 part /
`-vde2 253:66 0 8M 0 part

Then the usual flow is
1. fdisk, create partition set LVM partition type (8e)
$ sudo fdisk /dev/vd[abc]
2. Full PVs on all the three disks
$ sudo pvcreate /dev/vd[abc]
3. vgcreate a single VG out of all of the PVs
$ sudo vgcreate vg /dev/vda1 /dev/vdb1 /dev/vdc1

At this point it looks like this:
$ sudo vgdisplay vg
  --- Volume group ---
  VG Name vg
  System ID
  Format lvm2
  Metadata Areas 3
  Metadata Sequence No 1
  VG Access read/write
  VG Status resizable
  MAX LV 0
  Cur LV 0
  Open LV 0
  Max PV 0
  Cur PV 3
  Act PV 3
  VG Size 23.99 GiB
  PE Size 4.00 MiB
  Total PE 6141
  Alloc PE / Size 0 / 0
  Free PE / Size 6141 / 23.99 GiB
  VG UUID 1QMFbn-5DAW-T9IE-Fdfd-9RZK-t8gl-5nte5r

Ok, create normal as well as thin LVs out of that now.
First of all thin provisioning is not mainstream, the dependency is only a suggest, so install the tools
$ sudo apt-get install thin-provisioning-tools
Then create the normal LV
$ sudo lvcreate -L 5G --name lv_normal vg
And finally a thin LV
$ sudo lvcreate --size 10G --virtualsize 5G --thinpool mythinpool --name lv_thin vg
Let's go harder and overprovision the thin pool
$ sudo lvcreate --virtualsize 5G --thinpool mythinpool --name lv_thin2 vg
$ sudo lvcreate --virtualsize 5G --thinpool mythinpool --name lv_thin3 vg
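For context, the overcommit arithmetic behind the commands above: the pool was created with 10G of real space, while the three thin LVs promise 5G of virtual space each. A small sketch (plain shell arithmetic, nothing lvm-specific):

```shell
# Overcommit math for the pool created above: 10G of real pool space
# backing 3 x 5G of virtual (thin) space.
pool_gib=10
virtual_gib=$((3 * 5))
overcommit_pct=$((virtual_gib * 100 / pool_gib))
echo "virtual=${virtual_gib}G pool=${pool_gib}G overcommit=${overcommit_pct}%"
# prints: virtual=15G pool=10G overcommit=150%
```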

With that in place my LVs look like:
$ sudo lvdisplay
  --- Logical volume ---
  LV Path /dev/vg/lv_normal
  LV Name lv_normal
  VG Name vg
  LV UUID aCtNC0-gbx1-uHoB-3dC8-dfhl-NBxd-Axm879
  LV Write Access read/write
  LV Creation host, time paelzer-testlvm-xenial, 2017-01-25 12:54:02 +0000
  LV Status available
  # open 0
  LV Size 5.00 GiB
  Current LE 1280
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device 252:0

  --- Logical volume ---
  LV Name mythinpool
  VG Name vg
  LV UUID UCKy8A-ovc6-Qh9n-wrEM-c2s2-myvD-hpLhb0
  LV Write Access read/write
  LV Creation host, time paelzer-testlvm-xenial, 2017-01-25 13:04:03 +0000
  LV Pool metadata mythinpool_tmeta
  LV Pool d...


Changed in docker (Ubuntu):
status: New → Incomplete

Internal: for anyone else debugging on this, the mentioned test setup is on Diamond in guest "paelzer-testlvm-xenial"

Jon Grimm (jgrimm) on 2017-01-26
Changed in docker (Ubuntu):
assignee: Jon Grimm (jgrimm) → nobody
Tianon Gravi (tianon) on 2017-02-10
affects: docker (Ubuntu) → docker.io (Ubuntu)

------- Comment From <email address hidden> 2017-02-22 00:29 EDT-------
(In reply to comment #30)
> [full quote of comment #30 omitted]

Hi,
the proxy did well at hiding your actual reply by quoting my full post.

But since you asked for updates, I have to kindly ask you instead: please read my detailed repro approach in comment #5 (things just worked for me).

It is ending with:

> Could you either
> 1. report your exact steps on a fresh system to cause this
> or
> 2. modify the steps I reported until the issue shows up

We would need that to get any further in analyzing this - see also status=incomplete, which means we are waiting for info.


------- Comment From <email address hidden> 2017-05-11 05:45 EDT-------
I'm once again updating the steps that were followed to configure a thinpool:

pvcreate <disk>
vgcreate docker-storage <disk>
lvcreate -y -n thinpool docker-storage -l 95%VG

lvcreate -y -n thinpoolmeta docker-storage -l 1%VG

lvconvert -y --zero n -c 64K --thinpool docker-storage/thinpool --poolmetadata docker-storage/thinpoolmeta

Then configured docker to pick up this storage
and restarted the docker daemon.

Created containers and rebooted the partitions.

The docker daemon did not come up.

bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2017-05-12 01:19 EDT-------
Canonical,

The exact steps to recreate are mentioned in the previous update. Let us know if you need more information.

Thanks for providing the steps. I have no time to set this up "right now", but I'm setting it back to New to mark that we are no longer waiting on info from you until it is analyzed further.

Changed in docker.io (Ubuntu):
status: Incomplete → New
Andreas Hasenack (ahasenack) wrote :

TL;DR "sudo apt install thin-provisioning-tools" fixes it.

Adding debugging steps here for posterity.

ubuntu@15-89:~$ cat /etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker--storage-thinpool",
    "dm.use_deferred_removal=true",
    "dm.use_deferred_deletion=true"
  ]
}
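A note on the pool name in dm.thinpooldev: device-mapper names escape every dash inside a VG or LV name by doubling it, which is why VG "docker-storage" plus LV "thinpool" shows up as "docker--storage-thinpool". A quick sketch of that mangling (the sed-based derivation is my own illustration):

```shell
# Reproduce the /dev/mapper name for VG "docker-storage", LV "thinpool":
# each "-" inside a name is doubled, then VG and LV are joined with "-".
vg='docker-storage'
lv='thinpool'
dm_name="$(printf '%s' "$vg" | sed 's/-/--/g')-$(printf '%s' "$lv" | sed 's/-/--/g')"
echo "$dm_name"   # docker--storage-thinpool
```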

After reboot:

ubuntu@15-89:~$ docker info
Cannot connect to the Docker daemon. Is the docker daemon running on this host?

/var/log/syslog:
Jun 20 20:42:11 15-89 dockerd[1542]: time="2017-06-20T20:42:11.938899425Z" level=fatal msg="Error starting daemon: error initializing graphdriver: devicemapper: Non existing device docker--storage-thinpool"

root@15-89:~# ll /dev/mapper/
total 0
drwxr-xr-x 2 root root 60 Jun 20 20:41 ./
drwxr-xr-x 19 root root 3760 Jun 20 20:41 ../
crw------- 1 root root 10, 236 Jun 20 20:41 control

But lvs shows it:
root@15-89:~# lvs
  LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  thinpool docker-storage twi---t--- 38.00g

vgchange -a y didn't work, complaining that a tool called thin_check wasn't available. This tool comes with thin-provisioning-tools.

I then installed thin-provisioning-tools and rebooted one more time. Now docker runs fine.

What's left to check here is whether thin-provisioning-tools should be a strong or weak dependency of docker.io, i.e. a Recommends or a Suggests.

Andreas Hasenack (ahasenack) wrote :

@paelzer noted this in his comment #5:

"""
First of all thin provisioning is not mainstream, the dependency is only a suggest, so install the tools
$ sudo apt-get install thin-provisioning-tools
"""

Changed in docker.io (Ubuntu):
status: New → Triaged
summary: - VG/LV are not available in /dev/mapper/ after reboot
+ Missing thin-provisioning-tools prevent VG from being activated
summary: - Missing thin-provisioning-tools prevent VG from being activated
+ Missing thin-provisioning-tools prevent VG from being (de)activated
description: updated
affects: docker.io (Ubuntu) → lvm2 (Ubuntu)
summary: - Missing thin-provisioning-tools prevent VG from being (de)activated
+ Missing thin-provisioning-tools prevents VG from being (de)activated
summary: - Missing thin-provisioning-tools prevents VG from being (de)activated
+ Missing thin-provisioning-tools prevents VG with thin pool LV from being
+ (de)activated, but not its creation
description: updated
description: updated
description: updated
description: updated

I'm marking this bug as "new" again because of the package change (from docker.io to lvm2), to allow it to be triaged again in this new context.

Changed in lvm2 (Ubuntu):
status: Triaged → New
Changed in lvm2 (Debian):
status: Unknown → New
bugproxy (bugproxy) on 2017-06-23
tags: removed: bugnameltc-150003 severity-high
bugproxy (bugproxy) on 2017-06-27
tags: added: bugnameltc-150003 severity-high

------- Comment From <email address hidden> 2017-06-27 08:44 EDT-------
I tried again after installing thin-provisioning-tools and rebooting.

The docker service/daemon was up.
I don't see the issue.

root@bamlp5:~# apt-get install thin-provisioning-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
thin-provisioning-tools
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 339 kB of archives.
After this operation, 1,627 kB of additional disk space will be used.
Get:1 http://gb.ports.ubuntu.com/ubuntu-ports xenial/universe ppc64el thin-provisioning-tools ppc64el 0.5.6-1ubuntu1 [339 kB]
Fetched 339 kB in 0s (429 kB/s)
Selecting previously unselected package thin-provisioning-tools.
(Reading database ... 64237 files and directories currently installed.)
Preparing to unpack .../thin-provisioning-tools_0.5.6-1ubuntu1_ppc64el.deb ...
Unpacking thin-provisioning-tools (0.5.6-1ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up thin-provisioning-tools (0.5.6-1ubuntu1) ...

root@bamlp5:~# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.6
Storage Driver: devicemapper
Pool Name: docker--storage-thinpool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 11.8 MB
Data Space Total: 30.6 GB
Data Space Available: 30.59 GB
Metadata Space Used: 172 kB
Metadata Space Total: 318.8 MB
Metadata Space Available: 318.6 MB
Thin Pool Minimum Free Space: 3.06 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Library Version: 1.02.110 (2015-10-30)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-81-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: ppc64le
CPUs: 8
Total Memory: 967.4 MiB
Name: bamlp5
ID: EFRP:SO44:CIMD:TPH5:BBKN:RYBT:QZGE:7U7H:DOYF:OBI6:SKLG:DUCO
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8

root@bamlp5:~# reboot
Connection to bamlp5 closed by remote host.
Connection to bamlp5 closed.

[vinutha@kte ~]$ ssh root@bamlp5
root@bamlp5's password:
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-81-generic ppc64le)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage

********************************************************************************

IBM Business Use Statement
--------------------------

IBM's internal systems must only be used for conducting IBM's business or
for purposes authorized by IBM management.

Use is subject to audit at any time by IBM management.

Distribution: Ubuntu 16.04.2 LTS
Kernel Build: 4.4.0-81-generic
System Name : bamlp5
Model/Type : 8247-22L
Platform : powerpc64le
***************************...


bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2017-06-29 03:11 EDT-------
(In reply to comment #52)
> [full quote of comment #52 omitted]


> Canonical, we installed thin-provisioning-tools, tested, and were not able to see the issue.
> Our tester needs some clarification related to the update done above.

The summary for you and your tester for now IMHO is:
1. it is not a "bug" in docker.io
2. if thin-provisioning-tools are installed things work just fine
3. the bug was re-targeted against LVM2 to consider there if
  3a. lvm should reject to create thin pools if the tools are missing
  3b. lvm should have a harder dependency (component mismatch, so MIR needed)

I think for now we are waiting on feedback from lvm maintainers in Ubuntu and Debian.


IMHO thin provisioning is an optional volume type in LVM2 and is not required for operating LVM. The fact that lvconvert can convert things to/from optionally supported types is imho ok. In the same way, we ship many tools that can convert formats but not necessarily open them for modification. E.g. I can use qemu-img to convert disk images to, say, the VMware format, but I might not have any hypervisor that knows how to boot such an image and allow modifying it (e.g. upgrading packages).

From a usability point of view, it might help if attempts to activate thin volumes resulted in messaging like "maybe install packages foo bar?", similar to how command-not-found operates.

Simon Clift (ssclift-gmail) wrote :

This is related to a problem I've encountered today with lvmcache partitions.

From a fresh Ubuntu 18.04 install, with lvm2 installed, I create an LVM setup on HDDs with an SSD partition providing the cache:

lvcreate -n home_lv_root -L 1.2Tb vg0
vgextend vg0 /dev/nvme0n1p6
lvcreate -n home_lv_cache -L 17G vg0 /dev/nvme0n1p6
lvcreate -n home_lv_cachemeta -L 1.6G vg0 /dev/nvme0n1p6
lvconvert --type cache-pool --poolmetadata vg0/home_lv_cachemeta \
                                           vg0/home_lv_cache
lvconvert --type cache --cachepool vg0/home_lv_cache vg0/home_lv_root

then reboot. The volume vg0/home_lv_root disappears.

I try to find it:

pvscan
vgscan
lvscan
vgchange -ay

The vgchange complains it cannot complete "cache_check".

Here lies the problem: /usr/sbin/cache_check is in thin-provisioning-tools

So without thin-provisioning-tools it is possible to create an LVM2 volume that disappears on reboot.

This suggests that thin-provisioning-tools must be a prerequisite, or that cache_check be moved into the lvm2 package.

This also appears to be related to bug #1423796: if an LVM volume has a cache attached and is required at boot time, boot will fail completely. Creating a cached root volume seemed like a sensible thing to do. This also happened to me today on a server config, and I moved the OS entirely onto an SSD partition rather than deal with it.
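Both failure modes come down to external checker binaries that LVM execs at activation time, all of which Ubuntu ships in thin-provisioning-tools. A quick audit sketch; thin_check and cache_check appear in this bug's error messages, and the *_repair counterparts ship in the same package:

```shell
# Report which of the activation-time checker/repair tools are installed.
# All four are provided by the thin-provisioning-tools package on Ubuntu.
for tool in thin_check thin_repair cache_check cache_repair; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "ok      $tool"
    else
        echo "MISSING $tool"
    fi
done
```

On a system exhibiting this bug, all four lines would read MISSING.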

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in lvm2 (Ubuntu):
status: New → Confirmed

------- Comment From <email address hidden> 2019-04-09 17:29 EDT-------
Per the above comment, this is a legitimate problem and needs to be worked on. Not sure in which release it should be addressed, though. But moving it out of DEFERRED.

tags: added: targetmilestone-inin---
removed: targetmilestone-inin16042
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2019-05-02 02:06 EDT-------
Canonical, could you confirm in which release this issue could be fixed?

Changed in ubuntu-power-systems:
assignee: nobody → Canonical Foundations Team (canonical-foundations)
importance: Undecided → High
status: New → Triaged
tags: added: id-5ccc50675baa0c05bc322dce
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2019-05-08 04:56 EDT-------
Canonical, could you confirm in which release this issue could be fixed?

We will investigate and update this bug shortly.

I notice that this issue was raised some time ago. Is there any additional information you can share as to recent specific use cases or scenarios?

Thanks.

Manoj Iyer (manjo) wrote :

The lvm2 package lists thin-provisioning-tools as a Suggests rather than a Depends, because thin-provisioning-tools is in the universe archive (community supported) while lvm2 is in the main archive (Canonical supported). So when you install lvm2 you will see thin-provisioning-tools under "Suggested packages", and you have to list thin-provisioning-tools separately on the command line when installing lvm2. This is the case for all architectures, not just Power.
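That relationship is visible in the package metadata. A sketch over a reconstructed, abridged control stanza (the Depends line here is illustrative, not the real field), with the live-system equivalent in a comment:

```shell
# Abridged, illustrative control stanza for the pre-fix lvm2 package:
# thin-provisioning-tools sits in Suggests, which apt does not install
# by default (a Recommends would be installed, with standard apt settings).
cat > /tmp/lvm2.control <<'EOF'
Package: lvm2
Depends: dmsetup
Suggests: thin-provisioning-tools
EOF
grep -E '^(Depends|Recommends|Suggests):' /tmp/lvm2.control
# On a live system:
#   apt-cache show lvm2 | grep -E '^(Depends|Recommends|Suggests):'
```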

Manoj Iyer (manjo) wrote :

I have lowered the priority of this bug to Medium; we expect to address it in 20.04. The proposed fix is to add the thin-provisioning-tools package to Recommends in lvm2 so that it is installed automatically alongside lvm2.

Changed in lvm2 (Ubuntu):
importance: High → Medium
Changed in ubuntu-power-systems:
importance: High → Medium
Changed in lvm2 (Ubuntu):
milestone: none → later
assignee: nobody → Canonical Foundations Team (canonical-foundations)
summary: - Missing thin-provisioning-tools prevents VG with thin pool LV from being
- (de)activated, but not its creation
+ [20.04]Missing thin-provisioning-tools prevents VG with thin pool LV
+ from being (de)activated, but not its creation
bugproxy (bugproxy) on 2019-05-09
tags: added: targetmilestone-inin2004
removed: targetmilestone-inin---
summary: - [20.04]Missing thin-provisioning-tools prevents VG with thin pool LV
+ [20.04] Missing thin-provisioning-tools prevents VG with thin pool LV
from being (de)activated, but not its creation
Changed in lvm2 (Ubuntu):
assignee: Canonical Foundations Team (canonical-foundations) → nobody
Changed in ubuntu-power-systems:
assignee: Canonical Foundations Team (canonical-foundations) → Canonical Server Team (canonical-server)
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package lvm2 - 2.03.02-2ubuntu6

---------------
lvm2 (2.03.02-2ubuntu6) eoan; urgency=medium

  * d/control: stop dropping thin-provisioning-tools to Suggests as it
    is ready to be promoted via MIR LP 1828887. Fixes usability issues
    of thin-provisioning-tools not being installed by default (LP: #1657646).
    - d/control: also add thin-provisioning-tools build-dep as configure
      wants it around for some checks at build time.
  * d/p/lp-1842436-*: Avoid creation of mixed-blocksize PV on LVM
    volume groups as it can cause FS corruption (LP: #1842436)

 -- Christian Ehrhardt <email address hidden> Fri, 06 Sep 2019 08:23:10 +0200

Changed in lvm2 (Ubuntu):
status: Confirmed → Fix Released

Hrm??

This updated lvm2 migrated to -release with the field:
  Recommends: thin-provisioning-tools

But I have not seen an update here that it was promoted, and it really seems not to be in main yet:

root@e:~# apt-cache policy thin-provisioning-tools
thin-provisioning-tools:
  Candidate: 0.7.6-2.1ubuntu1
  Version table:
     0.7.6-2.1ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu eoan/universe amd64 Packages

root@e:~# apt-cache policy lvm2
lvm2:
  Candidate: 2.03.02-2ubuntu6
  Version table:
     2.03.02-2ubuntu6 500
        500 http://archive.ubuntu.com/ubuntu eoan/main amd64 Packages

Am I missing something? Shouldn't that be a component mismatch that triggers the handling of this MIR?

The MIR itself (bug 1828887) also got no update.

Ok, it got spotted and became a component mismatch, as expected, in the tool run tonight.
All is fine again; sorry for the noise.
