Comment 14 for bug 1974100

Christian Ehrhardt  (paelzer) wrote :

# I have fetched a new cloud image.

$ wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64-disk-kvm.img
$ file jammy-server-cloudimg-amd64-disk-kvm.img
jammy-server-cloudimg-amd64-disk-kvm.img: QEMU QCOW2 Image (v2), 2361393152 bytes

# Then I extended it to 25G.

$ qemu-img resize jammy-server-cloudimg-amd64-disk-kvm.img 25G
Image resized.
$ qemu-img info jammy-server-cloudimg-amd64-disk-kvm.img
image: jammy-server-cloudimg-amd64-disk-kvm.img
file format: qcow2
virtual size: 25 GiB (26843545600 bytes)
disk size: 569 MiB
cluster_size: 65536
Format specific information:
    compat: 0.10
    compression type: zlib
    refcount bits: 16

# Right before using it, I checked the image with zerofree

$ sudo zerofree -vn /dev/nbd3p1
0/185630/548091
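# Note for anyone reproducing: /dev/nbd3p1 above implies the qcow2 was already
# attached as a network block device; that step is not quoted here, but assuming
# qemu-nbd (device number arbitrary) it would look roughly like:
$ sudo modprobe nbd max_part=8
$ sudo qemu-nbd --connect=/dev/nbd3 jammy-server-cloudimg-amd64-disk-kvm.img
# ... run the zerofree check as above, then detach again ...
$ sudo qemu-nbd --disconnect /dev/nbd3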

# Now I replaced the 8 different images (those are the 8 attachment types I used above) with this freshly prepared image.

$ for f in /var/lib/libvirt/images/jdisk*.qcow2; do cp -v jammy-server-cloudimg-amd64-disk-kvm.img $f; done
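# (Reproduction note: this assumes the jdisk guest is shut off while the images
# are replaced; if it were still running, stop it first, e.g.:
$ virsh shutdown jdisk
# and wait for it to power off before copying over the disks.)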

# Then I started my guest, ran growpart and resize2fs on each of the disks, and mounted them (to trigger the lazy allocation)

$ virsh start jdisk
# (in guest now)
$ for d in sda sdb sdc sdd vdc vdd vde vdf; do sudo growpart /dev/$d 1; done
# All report "CHANGED: partition=1 start=227328 old: size=4384735 end=4612063 new: size=52201439 end=52428767"
$ for d in sda sdb sdc sdd vdc vdd vde vdf; do sudo e2fsck -f /dev/${d}1; sudo resize2fs /dev/${d}1; done
$ for d in sda sdb sdc sdd vdc vdd vde vdf; do sudo mkdir /mnt/${d}; sudo mount /dev/${d}1 /mnt/${d}; done
# A while later, umount them all
$ for d in sda sdb sdc sdd vdc vdd vde vdf; do sudo umount /mnt/${d}; done

# Now I take the same look at each of them, in two ways

# A - from the host's POV, for image size

Again, the IDE/SATA attachments without detect_zeroes=on are the ones that grew, to 1.3G in this case. The rest all stayed at 573M (just 3M more than when downloaded).
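For reference (the exact measurement command isn't quoted above), a simple sketch to get the allocated size per image on the host would be:
$ for f in /var/lib/libvirt/images/jdisk*.qcow2; do du -h $f; done
or, per image, "qemu-img info" and looking at the "disk size" line.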

# B - from the guest's POV, for FS behavior
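# The exact command isn't quoted here either; assuming the same zerofree -vn
# check as on the host, run inside the guest against each (now unmounted)
# filesystem, a sketch would be:
$ for d in sda sdb sdc sdd vdc vdd vde vdf; do sudo zerofree -vn /dev/${d}1; done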

They ALL reported the very same:
3/5976603/6525179

Which means that from the guest/FS POV all those disks are mostly free and do not have much crap on them.
I do not mind so much about the three blocks being off; as I understood you, @Brent, you had a lot more of such blocks, right?

P.S. If anyone repeats this as a repro: there is some bonus non-fun when mounting via "root=LABEL=cloudimg-rootfs", as all of the disks will carry the very same label :-)
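One hypothetical way around that label clash (not something I did above) is to relabel the test filesystems from inside the guest, e.g.:
$ for d in sda sdb sdc sdd vdc vdd vde vdf; do sudo e2label /dev/${d}1 jdisk-${d}; done
so that only the real root disk keeps the cloudimg-rootfs label.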