growpart does not work for lvm

Bug #1799953 reported by Scott Moser
This bug affects 1 person
Affects      Status        Importance  Assigned to  Milestone
cloud-init   Expired       Medium      Unassigned
cloud-utils  Fix Released  Medium      Unassigned

Bug Description

Similar to bug 1491148, growpart does not support growing an LVM device.
It would be good if growpart, in conjunction with cloud-init, could
recognize that the filesystem is on a logical volume and grow the
underlying partition or volume group.
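
For illustration, the layout in question looks something like this
(device names hypothetical):

$ lsblk -o NAME,TYPE,SIZE /dev/sda
NAME         TYPE  SIZE
sda          disk   20G
└─sda1       part   10G
  └─vg0-root lvm    10G

Here the root filesystem is on the LV vg0-root: after the disk grows,
both the partition (growpart) and the PV (pvresize) have to grow before
the filesystem itself can be resized.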

Related bugs:
 bug 1491148: growpart doesn't work with mdraid devices

Scott Moser (smoser)
Changed in cloud-init:
status: New → Confirmed
Changed in cloud-utils:
status: New → Confirmed
Changed in cloud-init:
status: Confirmed → Triaged
Changed in cloud-utils:
status: Confirmed → Triaged
Changed in cloud-init:
importance: Undecided → Medium
Changed in cloud-utils:
importance: Undecided → Medium
Revision history for this message
Gregory Young (gregoryyoungswi) wrote :

If this is accurate, I suspect the bug is in the cloud-init call to growpart, as I am using growpart 0.31 successfully on CentOS 7 appliances with LVM. There are obviously further steps needed to extend the LVM PV and LVs, along with the filesystem, but 0.31 seems to work well at growing the raw disk partition (we had to use the project release of 0.31 because the EL7 bundled package includes a much older release that mangles GPT disks larger than 2 TB due to Bug #1259703).
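
For reference, the further steps mentioned above would look roughly like
this (a sketch; the device, VG and LV names are hypothetical):

# after growpart has grown the raw disk partition:
pvresize /dev/sda2                          # grow the PV into the new space
lvextend -r -l +100%FREE /dev/centos/root   # grow the LV and its filesystem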

Revision history for this message
Scott Moser (smoser) wrote :

It seems that all we actually have to do is call 'pvresize' to make this work.

That would mean after resize, we'd just have to do something like:

  # part_path here assumed to be the full path to the partition
  if command -v pvs >/dev/null && pvs --noheadings -o pv_name | grep -q "${part_path}$"; then
    debug "$part_path was a lvm physical volume, using pvresize"
    pvresize "$part_path"
  fi

For reference, see below. I just did a quick test with a loop device (as root):

$ truncate --size 2G /tmp/my.img
$ losetup --find --show /tmp/my.img
/dev/loop5

# add partition 1 to be 1G of the 2G available .. manually
$ fdisk /dev/loop5

# this is necessary because fdisk doesn't register the new partition with the kernel.
$ partprobe /dev/loop5
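
# (alternatively, the two steps above can be scripted: assuming a
# reasonably recent sfdisk, this creates partition 1 at the default
# offset, 1GiB in size, and tells the kernel about it)
$ echo ',1GiB' | sfdisk /dev/loop5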

$ grep loop5 /proc/partitions
   7 5 2097152 loop5
 259 6 1048576 loop5p1

$ pvcreate /dev/loop5p1
  Physical volume "/dev/loop5p1" successfully created.
$ vgcreate testvg0 /dev/loop5p1
  Volume group "testvg0" successfully created
$ pvdisplay /dev/loop5p1
  --- Physical volume ---
  PV Name /dev/loop5p1
  VG Name testvg0
  PV Size 1.00 GiB / not usable 4.00 MiB
  Allocatable yes
  PE Size 4.00 MiB
  Total PE 255
  Free PE 255
  Allocated PE 0
  PV UUID o3d9dK-x7iA-8jws-Sl99-YFio-NQUs-CMCPuS

$ vgdisplay testvg0
  --- Volume group ---
  VG Name testvg0
  System ID
  Format lvm2
  ...
  Metadata Areas 1
  Metadata Sequence No 1
  VG Access read/write
  VG Status resizable
  MAX LV 0
  Cur LV 0
  Open LV 0
  Max PV 0
  Cur PV 1
  Act PV 1
  VG Size 1020.00 MiB
  PE Size 4.00 MiB
  Total PE 255
  Alloc PE / Size 0 / 0
  Free PE / Size 255 / 1020.00 MiB
  VG UUID N7I14w-i0Tr-7Sqs-8wOQ-7fDZ-l2oL-LdnoS2

$ growpart /dev/loop5 1
CHANGED: partition=1 start=2048 old: size=2097152 end=2099200 new: size=4192223 end=4194271

$ grep loop5 /proc/partitions
   7 5 2097152 loop5
 259 7 2096111 loop5p1

# it didn't grow by itself.
$ pvdisplay /dev/loop5p1 | grep PV.Size
  PV Size 1.00 GiB / not usable 4.00 MiB

# grow it
$ pvresize /dev/loop5p1
  Physical volume "/dev/loop5p1" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

$ pvdisplay /dev/loop5p1
  --- Physical volume ---
  PV Name /dev/loop5p1
  VG Name testvg0
  PV Size <2.00 GiB / not usable 1.98 MiB
  Allocatable yes
  PE Size 4.00 MiB
  Total PE 511
  Free PE 511
  Allocated PE 0
  PV UUID o3d9dK-x7iA-8jws-Sl99-YFio-NQUs-CMCPuS

# and it seems to have taken effect on the vg
$ sudo vgdisplay testvg0
  --- Volume group ---
  VG Name testvg0
  System ID
 ...


Revision history for this message
Scott Moser (smoser) wrote :

As a note... it seems like there is almost no fallout from doing this.
a.) subsequent calls to pvresize (possibly added to users' scripts to finish this) will still exit 0
b.) there is no other way that I'm aware of to use the space on that PV (you can't add a second VG on that PV or anything like that). By growing the partition we're already stopping a user from using that space in any way *other* than to add it to the PV.
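
For example, against the loop device test above, a repeated call still
exits 0:

$ pvresize /dev/loop5p1 >/dev/null; echo $?
0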

Revision history for this message
Scott Moser (smoser) wrote :

It seems that, to determine whether the partition we grew is an LVM PV, we should just use 'pvs' with explicit input rather than grep, and we should probably use the 'lvm' command rather than 'pvs' directly.

has_cmd() { command -v "$1" >/dev/null ; }

# 'lvm pvs <device>' seems to exit 0 when <device> is a pv and 5 otherwise.
if has_cmd lvm && lvm pvs --readonly -o pv_name "$part_path" >/dev/null 2>&1; then
    debug "$part_path was a lvm pv, going to use pvresize"
    lvm pvresize "$part_path" || echo "bad news, lvm pvresize $part_path failed"
fi
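
Checking those exit codes against the loop device test above:

$ lvm pvs --readonly -o pv_name /dev/loop5p1 >/dev/null 2>&1; echo $?
0
$ lvm pvs --readonly -o pv_name /dev/loop5 >/dev/null 2>&1; echo $?
5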

Paride Legovini (paride)
Changed in cloud-utils:
status: Triaged → In Progress
Revision history for this message
Scott Moser (smoser) wrote :

This is fix-released in cloud-utils 0.32.

Changed in cloud-utils:
status: In Progress → Fix Released
Revision history for this message
Scott Moser (smoser) wrote :

Growpart grows a partition (/dev/sda2) on a block device (/dev/sda).

So right now there might be the following scenarios:
 a.) [supported in cloud-init and growpart] root filesystem on a partition (/dev/sda2)
 b.) [supported in growpart] root filesystem on an LV (/dev/myvg0/mylv0 -> /dev/dm-1)
      where the LV is part of a VG that includes a PV that is a partition (/dev/sda1).
      This is only partially supported, because cloud-init won't do the right thing.
      The right thing would be to call `growpart /dev/sda 1`.
 c.) root filesystem on a block device (/dev/sda) - odd, but could happen
 d.) root filesystem on an LV that is part of a VG that includes a PV that is a full disk (/dev/sda)

'c' and 'd' should not need growpart: when the block device is discovered,
its size should be correctly identified by the kernel, so a 'resizefs' is
needed but growpart is not.

The code I added was intended to add support for 'b' in growpart, but should not have broken 'd'.

For cloud-init to add support for 'b', we would need to determine the situation and call growpart on /dev/sda1.
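
A rough sketch of that determination (device names hypothetical; it
assumes a single-PV VG and standard lvm2/util-linux tools):

# given the root LV, find its VG, the PV backing that VG, and the
# disk + partition number to hand to growpart
root_lv=/dev/myvg0/mylv0
vg=$(lvm lvs --noheadings -o vg_name "$root_lv" | tr -d ' ')
pv=$(lvm pvs --noheadings -o pv_name --select "vg_name=$vg" | tr -d ' ')
disk="/dev/$(lsblk -no pkname "$pv")"                  # e.g. /dev/sda
partnum=$(cat "/sys/class/block/${pv##*/}/partition")  # e.g. 1
growpart "$disk" "$partnum"
lvm pvresize "$pv"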

Revision history for this message
James Falcon (falcojr) wrote : Fixed in cloud-init version 21.2.

This bug is believed to be fixed in cloud-init version 21.2. If this is still a problem for you, please make a comment and set the state back to New.

Thank you.

Changed in cloud-init:
status: Triaged → Fix Released
Revision history for this message
Scott Moser (smoser) wrote :

This is *not* fixed in cloud-init 21.2 as reported.
It was fixed at https://github.com/canonical/cloud-init/pull/721
under commit 74fa008bfcd3263eb691cc0b3f7a055b17569f8b,
but then reverted under https://github.com/canonical/cloud-init/pull/887.

So... marking this Triaged.

Changed in cloud-init:
status: Fix Released → Triaged
Revision history for this message
James Falcon (falcojr) wrote :
Changed in cloud-init:
status: Triaged → Expired