Hi,
I was asked about a very similar case and needed to debug it.
So I thought I'd give the issue reported here a try to see how it looks today.
virt-install creates a guest with a command like:
-drive file=/dev/LVMpool/test-snapshot-virtinst,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
## Solved - confusion with pre-existing apparmor rules from other sources, as pools are unsupported by libvirt/apparmor ##
In this case virt-install pre-creates an LV of a given size and passes just that to the guest.
This is different from using the actual pool feature.
Given that, I'm "ok" that it doesn't need a special apparmor rule.
From the guest/apparmor point of view, the path is known when the guest starts and is added to the guest's profile.
(With a pool reference in the guest, that would not have worked.)
## experiments - setup ##
Let's define a guest which has a qcow2 and an LVM disk that we can snapshot for experiments.
We will use the disk created in the test above, but in a uvtool guest, to avoid any virt-install-specific quirks.
The other disk is just a qcow file.
$ sudo qemu-img create -f qcow2 /var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow 1G
Formatting '/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow', fmt=qcow2 size=1073741824 cluster_size=65536 lazy_refcounts=off refcount_bits=16
The config for those looks like:
qcow:
CMD: -drive file=/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow,format=qcow2,if=none,id=drive-virtio-disk2
apparmor: "/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow" rwk,
disk:
CMD: -drive file=/dev/LVMpool/test-snapshot-virtinst,format=raw,if=none,id=drive-virtio-disk3
apparmor: "/dev/dm-11" rwk,
which is a match, as:
$ ll /dev/LVMpool/test-snapshot-virtinst
lrwxrwxrwx 1 root root 8 Sep 11 05:14 /dev/LVMpool/test-snapshot-virtinst -> ../dm-11
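For reference, the corresponding disk definitions in the guest XML would look roughly like this (a sketch only; the vdc/vdd target names are assumed from the drive ids above):

```
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow'/>
  <target dev='vdc' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/LVMpool/test-snapshot-virtinst'/>
  <target dev='vdd' bus='virtio'/>
</disk>
```

virt-aa-helper resolves the /dev/LVMpool/... symlink before emitting the rule, which is why the profile lists /dev/dm-11.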
## experiments - snapshotting ##
For details of the spec, see: https://libvirt.org/formatdomain.html
Snapshot of just the qcow file:
$ virsh snapshot-create-as --print-xml --domain eoan-snapshot --disk-only --atomic --diskspec vda,snapshot=no --diskspec vdb,snapshot=no --diskspec vdc,file=/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow,snapshot=external --diskspec vdd,snapshot=no
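The --print-xml output of that command would look roughly like this (a sketch of the domainsnapshot XML built from the diskspec arguments above, per the libvirt format docs):

```
<domainsnapshot>
  <disks>
    <disk name='vda' snapshot='no'/>
    <disk name='vdb' snapshot='no'/>
    <disk name='vdc' snapshot='external'>
      <source file='/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow'/>
    </disk>
    <disk name='vdd' snapshot='no'/>
  </disks>
</domainsnapshot>
```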
$ virsh snapshot-list eoan-snapshot
Name Creation Time State
------------------------------------------------------------
1568196836 2019-09-11 06:13:56 -0400 disk-snapshot
The snapshot got added to the apparmor profile:
"/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow" rwk,
The position shows that this was done with the "append" feature of virt-aa-helper.
So it did not re-parse the guest but just added one more entry (as it would on hotplug).
I'm not trying an LVM snapshot, as that does not seem to be what was asked for.
Furthermore, LVM has its own capabilities to do so.
## check status after snapshot ##
The guest now has the new snapshot as its main file and the old one as backing file (COW chain).
Please do mind that this is the "runtime view"; once shut down you'll only see the new snapshot.
This is confirmed by the metadata in the qcow file.
$ sudo qemu-img info /var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow
image: /var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 196K
cluster_size: 65536
backing file: /var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
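In the live domain XML this COW chain shows up as a backingStore element on the disk, roughly like this (a sketch; the vdc target name is assumed from the earlier config):

```
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow'/>
  <backingStore type='file'>
    <format type='qcow2'/>
    <source file='/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow'/>
  </backingStore>
  <target dev='vdc' bus='virtio'/>
</disk>
```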
## restart guest ##
XML of the inactive guest is as expected.
The guest starts just fine (still with all 4 disk entries):
lrwxrwxrwx 1 root root 8 Sep 11 05:14 test-snapshot-virtinst -> ../dm-11
lrwxrwxrwx 1 root root 8 Sep 11 06:44 test-snapshot-virtinst-snap -> ../dm-14
Still dm-11, which matches the apparmor rule:
"/dev/dm-11" rwk,
Checking whether it is an issue to restart the guest with the LVM snapshot attached:
No, shutdown and start still work fine.
With that said I think we can close this old bug nowadays.
P.S. There is a case a friend of mine reported with qcow2 snapshots on LVM which sounds odd and is broken.
I'll track that down elsewhere, as it has nothing to do with the issue that was reported here.