Apparmor causes: libvirtError: operation failed: open disk image file failed during online snapshots via Nova API

Bug #1598833 reported by Silvan Kaiser
Affects: OpenStack Compute (nova) | Status: Won't Fix | Importance: Undecided | Assigned to: Silvan Kaiser

Bug Description

Quobyte CI fails the test_snapshot_create_with_volume_in_use tests (v1 and v2) with "libvirtError: operation failed: open disk image file failed". The Nova compute log shows:

2016-07-04 09:06:35.649 7297 DEBUG oslo_concurrency.processutils [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] u'findmnt --target /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92 --source quobyte@78.46.57.153:7861/quobyteci_test_volume' failed. Not Retrying. execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:422
2016-07-04 09:06:35.649 7297 DEBUG nova.virt.libvirt.volume.quobyte [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] Mounting volume 78.46.57.153:7861/quobyteci_test_volume at mount point /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92 ... mount_volume /opt/stack/nova/nova/virt/libvirt/volume/quobyte.py:53
2016-07-04 09:06:35.649 7297 DEBUG oslo_concurrency.processutils [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] Running cmd (subprocess): mount.quobyte 78.46.57.153:7861/quobyteci_test_volume /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92 execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
2016-07-04 09:06:36.830 7297 DEBUG oslo_concurrency.processutils [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] CMD "mount.quobyte 78.46.57.153:7861/quobyteci_test_volume /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92" returned: 0 in 1.181s execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
2016-07-04 09:06:36.832 7297 INFO nova.virt.libvirt.volume.quobyte [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] Mounted volume: 78.46.57.153:7861/quobyteci_test_volume
2016-07-04 09:06:36.832 7297 DEBUG oslo_concurrency.processutils [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] Running cmd (subprocess): getfattr -n quobyte.info /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92 execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
2016-07-04 09:06:36.853 7297 DEBUG oslo_concurrency.processutils [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] CMD "getfattr -n quobyte.info /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92" returned: 0 in 0.021s execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
2016-07-04 09:06:36.854 7297 DEBUG oslo_concurrency.lockutils [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] Lock "connect_volume" released by "nova.virt.libvirt.volume.quobyte.connect_volume" :: held 1.232s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2016-07-04 09:06:36.857 7297 DEBUG nova.virt.libvirt.guest [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] attach device xml:
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2" cache="none" io="native"/>
  <source file="/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.f21ecc73-29a7-44c8-8f09-831e701569d2"/>
  <target bus="virtio" dev="vdb"/>
  <serial>4604d6b5-cf47-4494-8833-15029e30a77e</serial>
</disk>
 attach_device /opt/stack/nova/nova/virt/libvirt/guest.py:251
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] Failed to attach volume at mountpoint: /dev/vdb
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] Traceback (most recent call last):
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1232, in attach_volume
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] guest.attach_device(conf, persistent=True, live=live)
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 252, in attach_device
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] self._domain.attachDeviceFlags(device_xml, flags=flags)
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] result = proxy_call(self._autowrap, f, *args, **kwargs)
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] rv = execute(f, *args, **kwargs)
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] six.reraise(c, e, tb)
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] rv = meth(*args, **kwargs)
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 517, in attachDeviceFlags
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615] libvirtError: operation failed: open disk image file failed
2016-07-04 09:06:38.512 7297 ERROR nova.virt.libvirt.driver [instance: 0713aaa8-f630-4c7f-b541-616ce6541615]
2016-07-04 09:06:38.513 7297 DEBUG oslo_concurrency.lockutils [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] Lock "connect_volume" acquired by "nova.virt.libvirt.volume.quobyte.disconnect_volume" :: waited 0.000s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2016-07-04 09:06:38.514 7297 DEBUG oslo_concurrency.processutils [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] Running cmd (subprocess): findmnt --target /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92 --source quobyte@78.46.57.153:7861/quobyteci_test_volume execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
2016-07-04 09:06:38.520 7297 DEBUG oslo_concurrency.processutils [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] CMD "findmnt --target /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92 --source quobyte@78.46.57.153:7861/quobyteci_test_volume" returned: 0 in 0.006s execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
2016-07-04 09:06:38.521 7297 DEBUG oslo_concurrency.processutils [req-84dbb010-c105-4b5b-8d1d-41028c114307 tempest-VolumesV2SnapshotTestJSON-1508672341 tempest-VolumesV2SnapshotTestJSON-1508672341] Running cmd (subprocess): umount.quobyte /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92 execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344

The libvirt daemon log shows a permission denied error:
2016-07-04 09:06:37.885+0000: 24010: debug : qemuSetupDiskPathAllow:60 : Process path /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.2c3539c0-c6ed-4c72-8e65-6afddda344df for disk
2016-07-04 09:06:37.885+0000: 24010: debug : qemuSetupDiskPathAllow:60 : Process path /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e for disk
2016-07-04 09:06:37.886+0000: 24010: debug : virCommandRunAsync:2282 : About to run /usr/lib/libvirt/virt-aa-helper -p 0 -r -u libvirt-0713aaa8-f630-4c7f-b541-616ce6541615 -f /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.f21ecc73-29a7-44c8-8f09-831e701569d2
2016-07-04 09:06:37.886+0000: 24010: debug : virCommandRunAsync:2285 : Command result 0, with PID 15693
2016-07-04 09:06:38.174+0000: 24010: debug : virCommandRun:2142 : Result status 0, stdout: '' stderr: ''
2016-07-04 09:06:38.174+0000: 24010: debug : qemuDomainPCIAddressGetNextSlot:2270 : PCI slot 0000:00:01 already in use
2016-07-04 09:06:38.174+0000: 24010: debug : qemuDomainPCIAddressGetNextSlot:2270 : PCI slot 0000:00:02 already in use
2016-07-04 09:06:38.174+0000: 24010: debug : qemuDomainPCIAddressGetNextSlot:2270 : PCI slot 0000:00:03 already in use
2016-07-04 09:06:38.174+0000: 24010: debug : qemuDomainPCIAddressGetNextSlot:2270 : PCI slot 0000:00:04 already in use
2016-07-04 09:06:38.174+0000: 24010: debug : qemuDomainPCIAddressGetNextSlot:2313 : Found free PCI slot 0000:00:05
2016-07-04 09:06:38.174+0000: 24010: debug : qemuDomainPCIAddressReserveAddr:2114 : Reserving PCI slot 0000:00:05.0 (multifunction='off')
2016-07-04 09:06:38.174+0000: 24010: debug : qemuDomainObjEnterMonitorInternal:1278 : Entering monitor (mon=0x7f516c002820 vm=0x7f515c003540 name=instance-0000000f)
2016-07-04 09:06:38.174+0000: 24010: debug : qemuMonitorAddDrive:3011 : mon=0x7f516c002820 drive=file=/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.f21ecc73-29a7-44c8-8f09-831e701569d2,if=none,id=drive-virtio-disk1,format=qcow2,serial=4604d6b5-cf47-4494-8833-15029e30a77e,cache=none,aio=native
2016-07-04 09:06:38.174+0000: 24010: debug : qemuMonitorJSONAddDrive:3126 : drive_add command not found, trying HMP
2016-07-04 09:06:38.174+0000: 24010: debug : qemuMonitorJSONCommandWithFd:264 : Send command '{"execute":"human-monitor-command","arguments":{"command-line":"drive_add dummy file=/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.f21ecc73-29a7-44c8-8f09-831e701569d2,if=none,id=drive-virtio-disk1,format=qcow2,serial=4604d6b5-cf47-4494-8833-15029e30a77e,cache=none,aio=native"},"id":"libvirt-16"}' for write with FD -1
2016-07-04 09:06:38.174+0000: 24010: debug : qemuMonitorSend:959 : QEMU_MONITOR_SEND_MSG: mon=0x7f516c002820 msg={"execute":"human-monitor-command","arguments":{"command-line":"drive_add dummy file=/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.f21ecc73-29a7-44c8-8f09-831e701569d2,if=none,id=drive-virtio-disk1,format=qcow2,serial=4604d6b5-cf47-4494-8833-15029e30a77e,cache=none,aio=native"},"id":"libvirt-16"}
 fd=-1
2016-07-04 09:06:38.176+0000: 24004: debug : qemuMonitorIOWrite:504 : QEMU_MONITOR_IO_WRITE: mon=0x7f516c002820 buf={"execute":"human-monitor-command","arguments":{"command-line":"drive_add dummy file=/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.f21ecc73-29a7-44c8-8f09-831e701569d2,if=none,id=drive-virtio-disk1,format=qcow2,serial=4604d6b5-cf47-4494-8833-15029e30a77e,cache=none,aio=native"},"id":"libvirt-16"}
 len=350 ret=350 errno=11
2016-07-04 09:06:38.244+0000: 24004: debug : qemuMonitorIOProcess:396 : QEMU_MONITOR_IO_PROCESS: mon=0x7f516c002820 buf={"return": "could not open disk image /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.f21ecc73-29a7-44c8-8f09-831e701569d2: Could not open backing file: Could not open '/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.2c3539c0-c6ed-4c72-8e65-6afddda344df': Permission denied\r\n", "id": "libvirt-16"}
 len=399
2016-07-04 09:06:38.244+0000: 24004: debug : qemuMonitorJSONIOProcessLine:157 : Line [{"return": "could not open disk image /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.f21ecc73-29a7-44c8-8f09-831e701569d2: Could not open backing file: Could not open '/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.2c3539c0-c6ed-4c72-8e65-6afddda344df': Permission denied\r\n", "id": "libvirt-16"}]
2016-07-04 09:06:38.244+0000: 24004: debug : qemuMonitorJSONIOProcessLine:177 : QEMU_MONITOR_RECV_REPLY: mon=0x7f516c002820 reply={"return": "could not open disk image /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.f21ecc73-29a7-44c8-8f09-831e701569d2: Could not open backing file: Could not open '/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.2c3539c0-c6ed-4c72-8e65-6afddda344df': Permission denied\r\n", "id": "libvirt-16"}
2016-07-04 09:06:38.244+0000: 24004: debug : qemuMonitorJSONIOProcess:226 : Total used 399 bytes out of 399 available in buffer
2016-07-04 09:06:38.244+0000: 24010: debug : qemuMonitorJSONCommandWithFd:269 : Receive command reply ret=0 rxObject=0x7f518cff16e0
2016-07-04 09:06:38.244+0000: 24010: error : qemuMonitorTextAddDrive:2611 : operation failed: open disk image file failed
2016-07-04 09:06:38.244+0000: 24010: debug : qemuDomainObjExitMonitorInternal:1301 : Exited monitor (mon=0x7f516c002820 vm=0x7f515c003540 name=instance-0000000f)
2016-07-04 09:06:38.244+0000: 24010: debug : virCommandRunAsync:2282 : About to run /usr/lib/libvirt/virt-aa-helper -p 0 -r -u libvirt-0713aaa8-f630-4c7f-b541-616ce6541615
2016-07-04 09:06:38.245+0000: 24010: debug : virCommandRunAsync:2285 : Command result 0, with PID 15695
2016-07-04 09:06:38.511+0000: 24010: debug : virCommandRun:2142 : Result status 0, stdout: '' stderr: ''
2016-07-04 09:06:38.512+0000: 24010: debug : qemuTeardownDiskPathDeny:102 : Process path /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.f21ecc73-29a7-44c8-8f09-831e701569d2 for disk
2016-07-04 09:06:38.512+0000: 24010: debug : qemuTeardownDiskPathDeny:102 : Process path /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e.2c3539c0-c6ed-4c72-8e65-6afddda344df for disk
2016-07-04 09:06:38.512+0000: 24010: debug : qemuTeardownDiskPathDeny:102 : Process path /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4604d6b5-cf47-4494-8833-15029e30a77e for disk
2016-07-04 09:06:38.512+0000: 24010: debug : qemuDomainObjEndJob:1206 : Stopping job: modify (async=none vm=0x7f515c003540 name=instance-0000000f)
2016-07-04 09:06:38.512+0000: 24010: debug : virDomainFree:2270 : dom=0x7f517417e660, (VM: name=instance-0000000f, uuid=0713aaa8-f630-4c7f-b541-616ce6541615)
2016-07-04 09:06:38.565+0000: 24004: debug : virNetlinkEventCallback:347 : dispatching to max 0 clients, called from event watch 6
2016-07-04 09:06:38.565+0000: 24004: debug : virNetlinkEventCallback:360 : event not handled.
2016-07-04 09:06:41.789+0000: 24004: debug : qemuMonitorIOProcess:396 : QEMU_MONITOR_IO_PROCESS: mon=0x7f516c002820 buf={"timestamp": {"seconds": 1467623201, "microseconds": 789082}, "event": "NIC_RX_FILTER_CHANGED", "data": {"name": "net0", "path": "/machine/peripheral/net0/virtio-backend"}}
 len=175

Silvan Kaiser (2-silvan)
Changed in nova:
assignee: nobody → Silvan Kaiser (2-silvan)
Revision history for this message
Silvan Kaiser (2-silvan) wrote :

It seems to me that Nova creates a snapshot file, but afterwards libvirt does not have permission to attach it to the instance. However, I currently cannot pin down exactly where the snapshot is generated in order to verify how the snapshot file is created; I'm happy for input on this!

tags: added: snapshot
Revision history for this message
Silvan Kaiser (2-silvan) wrote :

This is an AppArmor issue. AppArmor denies attaching the snapshotted volume to a guest; example log output from manual runs of the given test shows:

Jul 14 13:33:35 manualnovaci kernel: [ 7317.047425] type=1400 audit(1468503215.581:148): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-7cf6b321-0188-4a3c-8e2a-cda7f944405f" pid=31697 comm="apparmor_parser"
Jul 14 13:33:35 manualnovaci kernel: [ 7317.269093] type=1400 audit(1468503215.805:149): apparmor="DENIED" operation="open" profile="libvirt-7cf6b321-0188-4a3c-8e2a-cda7f944405f" name="/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-c26c8016-2e42-48bf-a112-f14c62d48f9a.6fb6e32e-6a2f-47ba-b9d3-a36d55c21db3" pid=31439 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
Jul 14 13:33:35 manualnovaci kernel: [ 7317.269112] type=1400 audit(1468503215.805:150): apparmor="DENIED" operation="open" profile="libvirt-7cf6b321-0188-4a3c-8e2a-cda7f944405f" name="/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-c26c8016-2e42-48bf-a112-f14c62d48f9a.6fb6e32e-6a2f-47ba-b9d3-a36d55c21db3" pid=31439 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
Jul 14 13:33:35 manualnovaci kernel: [ 7317.269138] type=1400 audit(1468503215.805:151): apparmor="DENIED" operation="open" profile="libvirt-7cf6b321-0188-4a3c-8e2a-cda7f944405f" name="/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-c26c8016-2e42-48bf-a112-f14c62d48f9a.6fb6e32e-6a2f-47ba-b9d3-a36d55c21db3" pid=31439 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
Jul 14 13:33:36 manualnovaci kernel: [ 7318.080239] type=1400 audit(1468503216.617:152): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-211d6603-8366-4e0f-9a89-44a23704597a" pid=31707 comm="apparmor_parser"
Jul 14 13:33:36 manualnovaci kernel: [ 7318.333090] type=1400 audit(1468503216.869:153): apparmor="DENIED" operation="open" profile="libvirt-211d6603-8366-4e0f-9a89-44a23704597a" name="/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-d08277bd-1380-49fe-a94e-c4ba38f5c976.4966661d-4491-4ea9-aa25-30b9d2aa4cf5" pid=31476 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
Jul 14 13:33:36 manualnovaci kernel: [ 7318.333106] type=1400 audit(1468503216.869:154): apparmor="DENIED" operation="open" profile="libvirt-211d6603-8366-4e0f-9a89-44a23704597a" name="/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-d08277bd-1380-49fe-a94e-c4ba38f5c976.4966661d-4491-4ea9-aa25-30b9d2aa4cf5" pid=31476 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
Jul 14 13:33:36 manualnovaci kernel: [ 7318.333131] type=1400 audit(1468503216.869:155): apparmor="DENIED" operation="open" profile="libvirt-211d6603-8366-4e0f-9a89-44a23704597a" name="/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-d08277bd-1380-49fe-a94e-c4ba38f5c976.4966661d-4491-4ea9-aa25-30b9d2aa4cf5" pid=31476 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
Jul 14 13:33:37 manualnovaci kernel: [ 7318.840828] type=1400 audit(1468503217.377:156): apparmor="STATUS" ...
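As an aside, the relevant fields of these type=1400 audit records can be pulled out mechanically when sifting through larger logs. The following is an illustrative sketch (not part of the bug or of any OpenStack code); the sample line is one of the denials quoted above, joined onto one line:

```python
import re

def parse_audit(line):
    """Extract the key="value" fields from an AppArmor audit record."""
    return dict(re.findall(r'(\w+)="([^"]*)"', line))

def denied_paths(lines):
    """Return (profile, path, requested_mask) for every DENIED record."""
    out = []
    for line in lines:
        fields = parse_audit(line)
        if fields.get("apparmor") == "DENIED":
            out.append((fields["profile"], fields.get("name"),
                        fields.get("requested_mask")))
    return out

# Sample record from the kernel log excerpt above.
sample = (
    'type=1400 audit(1468503215.805:149): apparmor="DENIED" operation="open" '
    'profile="libvirt-7cf6b321-0188-4a3c-8e2a-cda7f944405f" '
    'name="/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/'
    'volume-c26c8016-2e42-48bf-a112-f14c62d48f9a.'
    '6fb6e32e-6a2f-47ba-b9d3-a36d55c21db3" '
    'pid=31439 comm="qemu-system-x86" requested_mask="r" denied_mask="r"'
)

if __name__ == "__main__":
    for profile, path, mask in denied_paths([sample]):
        print(profile, mask, path)
```

Every denied path here is the snapshot file on the Quobyte mount, always with requested_mask="r".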


Revision history for this message
Silvan Kaiser (2-silvan) wrote :

Manually running test_snapshot_create_with_volume_in_use succeeds with AppArmor deactivated (I did this in order to verify that AppArmor is the cause).

Silvan Kaiser (2-silvan)
summary: - libvirtError: operation failed: open disk image file failed during
- online snapshots via Nova API
+ Apparmor causes: libvirtError: operation failed: open disk image file
+ failed during online snapshots via Nova API
tags: added: apparmor
Revision history for this message
Silvan Kaiser (2-silvan) wrote :

The issue occurs with the sVirt-integrated protection between guest VMs. AppArmor can still be used with the workaround of configuring:

security_driver = "none"

in /etc/libvirt/qemu.conf. This retains basic host protection without triggering the snapshot issue documented herein.
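For reference, the workaround as it would look in the config file (this is a sketch of the edit, not an endorsement; note that disabling the security driver removes the per-guest sVirt confinement, and libvirtd must be restarted for it to take effect):

```ini
# /etc/libvirt/qemu.conf
# Disable the sVirt security driver: qemu processes are no longer confined
# by per-domain AppArmor profiles (weaker isolation between guests).
security_driver = "none"
```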

Revision history for this message
Silvan Kaiser (2-silvan) wrote :

Here's another example for reproducing this manually with a _running_ guest:

1) create a volume -> works
2) attach volume to guest -> works
3) detach volume from guest -> works
4) create snapshot from volume -> works
5) attach volume(!) to guest -> FAIL
6) delete snapshot -> works
7) attach volume to guest -> works(!)

When running this procedure with a stopped guest things work fine, e.g.:

[..]
4) create snapshot from volume -> works
5) attach volume to (shutdown) guest -> works (checked by powering up and looking for the volumes device)

Revision history for this message
Silvan Kaiser (2-silvan) wrote :

So, in the libvirt logs I can see libvirt calling virt-aa-helper for the volume to be attached. But shortly afterwards things fail when libvirt tries to access the volume's backing file (remember that an online snapshot is taken by creating a new file for the volume, with the original file becoming the new file's backing file).
It seems virt-aa-helper does not add the volume's backing file to the AppArmor profile, causing this issue?
But shouldn't this happen in other setups, too?
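To make the suspected gap concrete, here is a toy model (an assumption-laden sketch, not libvirt code: the two helper functions are invented for illustration, while the paths are copied from the logs in this report). The idea is that on hot-attach only the file handed to virt-aa-helper via -f lands in the profile, while qemu also needs read access to that file's qcow2 backing file:

```python
# Toy model of the suspected virt-aa-helper gap (illustrative only).

def profile_additions(attached_file):
    """What hot-attach appears to add to the AppArmor profile:
    only the file handed to virt-aa-helper via its -f argument."""
    return {attached_file}

def files_qemu_opens(backing_chain):
    """What qemu actually opens: the overlay plus every backing file."""
    return set(backing_chain)

overlay = ("/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/"
           "volume-4604d6b5-cf47-4494-8833-15029e30a77e."
           "f21ecc73-29a7-44c8-8f09-831e701569d2")
backing = ("/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/"
           "volume-4604d6b5-cf47-4494-8833-15029e30a77e."
           "2c3539c0-c6ed-4c72-8e65-6afddda344df")

# The difference is exactly the backing file -- the path AppArmor denies.
missing = files_qemu_opens([overlay, backing]) - profile_additions(overlay)
print(missing)
```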

Revision history for this message
Silvan Kaiser (2-silvan) wrote :

Example log excerpt (libvirt log) from trying to attach a volume with a snapshot to a running guest VM (manual test):

2016-07-27 10:31:23.135+0000: 16633: debug : qemuDomainPCIAddressGetNextSlot:2313 : Found free PCI slot 0000:00:05
2016-07-27 10:31:23.135+0000: 16633: debug : qemuDomainPCIAddressReserveAddr:2114 : Reserving PCI slot 0000:00:05.0 (multifunction='off')
2016-07-27 10:31:23.193+0000: 16633: debug : qemuSetupDiskPathAllow:60 : Process path /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-605eb107-baf5-4a10-a2b9-60a3a77f2fc0.1a65bc67-f300-4927-9231-d4805690e81b for disk
2016-07-27 10:31:23.194+0000: 16633: debug : qemuSetupDiskPathAllow:60 : Process path /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-605eb107-baf5-4a10-a2b9-60a3a77f2fc0.2f3c2d0e-5315-48f3-9fc7-7e392c7958e1 for disk
2016-07-27 10:31:23.194+0000: 16633: debug : qemuSetupDiskPathAllow:60 : Process path /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-605eb107-baf5-4a10-a2b9-60a3a77f2fc0 for disk
2016-07-27 10:31:23.197+0000: 16633: debug : virCommandRunAsync:2282 : About to run /usr/lib/libvirt/virt-aa-helper -p 0 -r -u libvirt-4df0bca7-5e5c-47b8-9c27-d40277351fb0 -f /mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-605eb107-baf5-4a10-a2b9-60a3a77f2fc0.1a65bc67-f300-4927-9231-d4805690e81b
2016-07-27 10:31:23.197+0000: 16633: debug : virCommandRunAsync:2285 : Command result 0, with PID 10895
2016-07-27 10:31:23.521+0000: 16633: debug : virCommandRun:2142 : Result status 0, stdout: '' stderr: ''
2016-07-27 10:31:23.521+0000: 16633: debug : qemuDomainPCIAddressGetNextSlot:2270 : PCI slot 0000:00:01 already in use
2016-07-27 10:31:23.521+0000: 16633: debug : qemuDomainPCIAddressGetNextSlot:2270 : PCI slot 0000:00:02 already in use
2016-07-27 10:31:23.521+0000: 16633: debug : qemuDomainPCIAddressGetNextSlot:2270 : PCI slot 0000:00:03 already in use
2016-07-27 10:31:23.521+0000: 16633: debug : qemuDomainPCIAddressGetNextSlot:2270 : PCI slot 0000:00:04 already in use
2016-07-27 10:31:23.521+0000: 16633: debug : qemuDomainPCIAddressGetNextSlot:2313 : Found free PCI slot 0000:00:05
2016-07-27 10:31:23.521+0000: 16633: debug : qemuDomainPCIAddressReserveAddr:2114 : Reserving PCI slot 0000:00:05.0 (multifunction='off')
2016-07-27 10:31:23.521+0000: 16633: debug : qemuDomainObjEnterMonitorInternal:1278 : Entering monitor (mon=0x7fc904181380 vm=0x7fc8f80117a0 name=instance-00000015)
2016-07-27 10:31:23.521+0000: 16633: debug : qemuMonitorAddDrive:3011 : mon=0x7fc904181380 drive=file=/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-605eb107-baf5-4a10-a2b9-60a3a77f2fc0.1a65bc67-f300-4927-9231-d4805690e81b,if=none,id=drive-virtio-disk1,format=qcow2,serial=605eb107-baf5-4a10-a2b9-60a3a77f2fc0,cache=none,aio=native
2016-07-27 10:31:23.521+0000: 16633: debug : qemuMonitorJSONAddDrive:3126 : drive_add command not found, trying HMP
2016-07-27 10:31:23.521+0000: 16633: debug : qemuMonitorJSONCommandWithFd:264 : Send command '{"execute":"human-monitor-command","arguments":{"command-line":"drive_add dummy file=/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-605eb107-baf5-4a10-a2b9-60a3a77f2fc0.1a65bc67-f300-4927-9231-d...


Revision history for this message
Silvan Kaiser (2-silvan) wrote :

More apparmor profile details from manual testing:

The following profiles were captured while the given volume:
/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4573bf40-c033-44d4-9231-8afd03e6286e.c0c1654e-ac4b-487f-8d1e-b29d59fa7ce2

had the following snapshot backing file:
/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4573bf40-c033-44d4-9231-8afd03e6286e

VM is shutdown and volume is attached:
# DO NOT EDIT THIS FILE DIRECTLY. IT IS MANAGED BY LIBVIRT.
  "/var/log/libvirt/**/instance-0000000d.log" w,
  "/var/lib/libvirt/**/instance-0000000d.monitor" rw,
  "/var/run/libvirt/**/instance-0000000d.pid" rwk,
  "/run/libvirt/**/instance-0000000d.pid" rwk,
  "/var/run/libvirt/**/*.tunnelmigrate.dest.instance-0000000d" rw,
  "/run/libvirt/**/*.tunnelmigrate.dest.instance-0000000d" rw,
  "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/disk" rw,
  "/opt/stack/data/nova/instances/_base/2ea96822a43c04748331603dcfb7c24104d0c175" r,
  # don't audit writes to readonly files
  deny "/opt/stack/data/nova/instances/_base/2ea96822a43c04748331603dcfb7c24104d0c175" w,
  "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/disk.config" r,
  # don't audit writes to readonly files
  deny "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/disk.config" w,
  "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/console.log" rw,
  "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/console.log" rw,
  "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/kernel" r,
  # don't audit writes to readonly files
  deny "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/kernel" w,
  "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/ramdisk" r,
  # don't audit writes to readonly files
  deny "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/ramdisk" w,
  /dev/vhost-net rw,

VM is started after volume has been attached while VM was shut down:
# DO NOT EDIT THIS FILE DIRECTLY. IT IS MANAGED BY LIBVIRT.
  "/var/log/libvirt/**/instance-0000000d.log" w,
  "/var/lib/libvirt/**/instance-0000000d.monitor" rw,
  "/var/run/libvirt/**/instance-0000000d.pid" rwk,
  "/run/libvirt/**/instance-0000000d.pid" rwk,
  "/var/run/libvirt/**/*.tunnelmigrate.dest.instance-0000000d" rw,
  "/run/libvirt/**/*.tunnelmigrate.dest.instance-0000000d" rw,
  "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/disk" rw,
  "/opt/stack/data/nova/instances/_base/2ea96822a43c04748331603dcfb7c24104d0c175" r,
  # don't audit writes to readonly files
  deny "/opt/stack/data/nova/instances/_base/2ea96822a43c04748331603dcfb7c24104d0c175" w,
  "/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4573bf40-c033-44d4-9231-8afd03e6286e.c0c1654e-ac4b-487f-8d1e-b29d59fa7ce2" rw,
  "/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4573bf40-c033-44d4-9231-8afd03e6286e" r,
  # don't audit writes to readonly files
  deny "/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-4573bf40-c033-44d4-9231-8afd03e6286e" w,
  "/opt/stack/data/nova/instances/7641ded0-42f6-443e-b376-db1c9c738ed6/disk.config" r,
  # d...

Revision history for this message
Sean Dague (sdague) wrote :

I don't think this is really a nova issue; apparmor configuration is beyond the project's scope.

Changed in nova:
status: New → Won't Fix
Revision history for this message
Silvan Kaiser (2-silvan) wrote :

Yep, correct. It's a libvirt issue with libvirt's AppArmor integration.

I filed a libvirt bug at https://bugzilla.redhat.com/show_bug.cgi?id=1361592
