Comment 9 for bug 1974100

Christian Ehrhardt (paelzer) wrote: Re: inode lazy init in a VM fills virtual disk with garbage

Hi Brent,
first of all I'm glad that you got around things via my suggestions.

There is a reason why newer guides usually recommend virtio, and why it is the default in higher-level tools like virt-manager, uvtool, ...: it is just more capable.

Thanks for all the cross-checks that you did!
Taking GNS3 out of the picture and comparing just the disk attachments is great.

We can even take cloud-init out of this; all it does is (as instructed) extend the root FS on those disks. With "resize_rootfs: noblock" it does so in the background, but either way it only grows the filesystem to the size of the partition.
Therefore we might as well just run mkfs.ext4 (or any other filesystem of one's choice) to compare, and even use hot add/remove of disks instead of booting into a new system each time - see the sketch below.
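
As a minimal sketch of that (the guest name, image path and target device here are placeholders), one could hot-attach a scratch disk and format it with and without lazy inode/journal init:

  # on the hypervisor: create and hot-attach a scratch disk
  qemu-img create -f qcow2 /var/lib/libvirt/images/scratch.qcow2 10G
  virsh attach-disk guest1 /var/lib/libvirt/images/scratch.qcow2 vdb --subdriver qcow2 --live

  # in the guest: format with lazy init disabled, then with the defaults, to compare
  mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vdb
  mkfs.ext4 /dev/vdb

  # on the hypervisor: detach when done
  virsh detach-disk guest1 vdb --live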

So (to me) it really seems to come down to "qemu ide/sata disks do not work well with discard/trim".
That doesn't seem like much of a bug, more a feature request against the known older variants of disk attachment, which is unlikely to happen.
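
For reference, from inside the guest one can quickly check whether a given disk attachment advertises discard at all (the device and mount point are just examples):

  # non-zero DISC-GRAN/DISC-MAX means the device supports discard
  lsblk --discard /dev/vda

  # ask the filesystem to trim; this fails on devices without discard support
  fstrim -v /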

So if we wanted to compare, it feels like it comes down to:
A) disk attachments, features and options (on the hypervisor)
  examples: discard, detect-zeroes, virtio vs scsi vs sata and such (see the sketch after this list)
B) filesystem options
  - in your case defined by how the cloud-image was created
  - but to compare fs-effects we could as well mkfs* a few extra disks
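
For A), the relevant knobs live on the disk's <driver> and <target> elements in the libvirt domain XML. A minimal sketch (the image path is a placeholder, and the attribute values are exactly what one would vary per test):

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' discard='unmap' detect_zeroes='unmap'/>
    <source file='/var/lib/libvirt/images/test.qcow2'/>
    <target dev='vda' bus='virtio'/>
    <!-- vs bus='scsi' or bus='sata' (with dev='sda' then) -->
  </disk>

For B), varying the mkfs.ext4 options as in the earlier example covers the filesystem side without rebuilding the cloud image each time.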