Apparmor blocks access to /dev/vhost-net

Bug #1815910 reported by daniel.pawlik on 2019-02-14
This bug affects 5 people
Affects:
OpenStack nova-compute charm
libvirt (Ubuntu)

Bug Description

While attaching a new interface to the instance, I get an error (from dmesg):

    [1387677.245725] audit: type=1400 audit(1550147444.575:10991): apparmor="DENIED" operation="file_receive" profile="libvirt-fc5b1ccd-6d5c-459e-8d6b-b98c26df504e" name="/dev/vhost-net" pid=36309 comm="qemu-system-x86" requested_mask="wr" denied_mask="wr" fsuid=64055 ouid=0
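As an aside, the interesting fields of such an audit line can be pulled out with standard tools; a minimal sketch on the line from this report:

```shell
# Sketch: extract the denied path and mask from the audit line above.
line='audit: type=1400 audit(1550147444.575:10991): apparmor="DENIED" operation="file_receive" profile="libvirt-fc5b1ccd-6d5c-459e-8d6b-b98c26df504e" name="/dev/vhost-net" pid=36309 comm="qemu-system-x86" requested_mask="wr" denied_mask="wr" fsuid=64055 ouid=0'
name=$(printf '%s\n' "$line" | grep -o 'name="[^"]*"' | cut -d'"' -f2)
mask=$(printf '%s\n' "$line" | grep -o 'denied_mask="[^"]*"' | cut -d'"' -f2)
echo "$name $mask"   # → /dev/vhost-net wr
```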

Workaround is adding to: /etc/apparmor.d/abstractions/libvirt-qemu

    /dev/vhost-net rw,

More info:
Error that I get in nova-compute:

libvirtError: internal error: unable to execute QEMU command 'getfd': No file descriptor supplied via SCM_RIGHTS

Libvirt version: 4.0.0-1ubuntu8.6
Ubuntu release: Bionic

Should libvirt have access to /dev/vhost-net?

Hi Daniel,
thank you for your report and your help making Ubuntu better.

Your workaround is exactly the right way to flag your system for your special local configuration.
In later releases there is a file at:
That file helps to add such a rule without conffile conflicts on package updates.

I assume that you have started the domain without any vhost-net device, but then hotplugged one.
The rule for /dev/vhost-net is added on guest definition if a network device has VIR_DOMAIN_NET_BACKEND_TYPE_QEMU and virDomainNetIsVirtioModel.

That means if you start without any such device the rule won't be added at startup, and later at hotplug you hit the reported error.
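A toy stand-in for that condition (the XML snippet and the grep are illustrative; the real check lives in virt-aa-helper): the vhost-net rule is only emitted when the domain definition carries a virtio-model interface.

```shell
# Toy stand-in for the virt-aa-helper condition: emit the profile rule
# only if the domain XML declares a virtio-model <interface>.
xml='<interface type="network"><model type="virtio"/></interface>'
rule=''
if printf '%s\n' "$xml" | grep -q '<model type="virtio"/>'; then
  rule='  "/dev/vhost-net" rw,'
fi
echo "$rule"
```

A domain defined without any such interface therefore gets a profile with no vhost-net rule, which is exactly the hotplug trap described above.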

I'd need to check if any of the relabeling calls that we have registered in virAppArmorSecurityDriver could be made to detect a vhost device and add that path in addition to what the call was actually made for; maybe the FD for the vhost device gets a labeling call?

For now please confirm my assumption on your setup before I chase a red herring in the code :-)

daniel.pawlik (daniel-pawlik) wrote :

Thanks Christian for replying.

Yes, I spawn the instance without a network interface, and when I later add one to the VM it raises the error.

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in libvirt (Ubuntu):
status: New → Confirmed

1. Start a new guest with all network devices dropped (e.g. one created via uvtool)

2. Check the rendered profile - as expected there is no /dev/vhost-net
$ cat /etc/apparmor.d/libvirt/$(virsh dominfo disco-test-vhost | awk '/^Security label:/ {print $3}').files
  "/var/log/libvirt/**/disco-test-vhost.log" w,
  "/var/lib/libvirt/qemu/domain-disco-test-vhost/monitor.sock" rw,
  "/var/lib/libvirt/qemu/domain-1-disco-test-vhost/*" rw,
  "/var/run/libvirt/**/" rwk,
  "/run/libvirt/**/" rwk,
  "/var/run/libvirt/**/*.tunnelmigrate.dest.disco-test-vhost" rw,
  "/run/libvirt/**/*.tunnelmigrate.dest.disco-test-vhost" rw,
  "/var/lib/uvtool/libvirt/images/disco-test-vhost.qcow" rwk,
  "/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTkuMDQ6YW1kNjQgMjAxOTAyMTA=" rk,
  "/var/lib/uvtool/libvirt/images/disco-test-vhost-ds.qcow" rwk,
  "/var/lib/libvirt/qemu/domain-1-disco-test-vhost/{,**}" rwk,
  "/var/lib/libvirt/qemu/channel/target/domain-1-disco-test-vhost/{,**}" rwk,
  "/var/lib/libvirt/qemu/domain-1-disco-test-vhost/master-key.aes" rwk,
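For reference, the awk in step 2 just picks the third field of the "Security label:" line; a self-contained demo on canned `virsh dominfo` output:

```shell
# Demo of the label extraction on canned `virsh dominfo` output.
dominfo='Security model:  apparmor
Security DOI:    0
Security label:  libvirt-236ce1b4-61fd-4aa5-8031-a4df09de5b32 (enforcing)'
label=$(printf '%s\n' "$dominfo" | awk '/^Security label:/ {print $3}')
echo "$label"   # → libvirt-236ce1b4-61fd-4aa5-8031-a4df09de5b32
```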

3. Try to hot-add a virtio vhost-net device (and track dmesg)
$ cat net.xml
    <interface type='network'>
      <mac address='52:54:00:f6:9a:47'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
$ virsh attach-device disco-test-vhost net.xml
error: Failed to attach device from net.xml
error: internal error: unable to execute QEMU command 'getfd': No file descriptor supplied via SCM_RIGHTS

And dmesg reports:
audit: type=1400 audit(1550159090.042:133): apparmor="DENIED" operation="file_receive" profile="libvirt-236ce1b4-61fd-4aa5-8031-a4df09de5b32" name="/dev/vhost-net" pid=22374 comm="qemu-system-x86" requested_mask="wr" denied_mask="wr" fsuid=64055 ouid=0

That should be exactly your error; now let's check what security labeling calls are made ...

Use the full list as breakpoints, which you can easily get from the source like:
$ tail -n 60 src/security/security_apparmor.c | awk '/ = App/ {gsub(",",""); printf("b %s\n", $3);}'
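On one sample line of the driver struct (the field name here is illustrative), the filter turns each registered callback into a gdb breakpoint command:

```shell
# What the awk filter produces for one (illustrative) struct line:
# it matches " = App", strips the trailing comma, and prefixes "b ".
bp=$(printf '    .domainSetSecurityImageFDLabel = AppArmorSetFDLabel,\n' | \
  awk '/ = App/ {gsub(",",""); printf("b %s\n", $3);}')
echo "$bp"   # → b AppArmorSetFDLabel
```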

But the only hit we get is the FD call as expected:
Thread 2 "libvirtd" hit Breakpoint 31, AppArmorSetFDLabel (mgr=0x7f6e3c00b0a0, def=0x7f6e3c0bbca0, fd=21) at ../../../src/security/security_apparmor.c:1139

We don't really know that we are getting a vhost-net device at this point.
We get the FD that we pass like:
map that to
and finally resolve that to

That is all we get, afterwards no more labelling calls.
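The FD-to-path step itself is the usual /proc lookup; a self-contained sketch (using /dev/null as a stand-in for the passed FD):

```shell
# Resolve an open file descriptor back to the path it refers to,
# the same way a labeling call would have to.
exec 9</dev/null                 # open a throwaway FD
path=$(readlink "/proc/$$/fd/9") # map shell PID + FD number to a path
echo "$path"                     # → /dev/null
exec 9<&-                        # close it again
```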
I think the assumption "if one is adding /dev/net/tun he might use vhost so also add /dev/vhost-net" is awkward.

I don't see other good places to catch that dynamic, but then the solution might be quite different. It was added by [1] quite a while back, but I'd like to get in touch with security if /dev/vhost-net is still considered dangerous, maybe things are more mature and we can allow it in general now?

I'll send a request now, but I also will see them next week so I can discuss it there in case there is no reply.


Changed in libvirt (Ubuntu):
status: Confirmed → Triaged

@security - for the issue outlined above I'd like to suggest to add /dev/vhost-net statically in src/security/apparmor/libvirt-qemu (and drop the detection in virt-aa-helper as it is superfluous).

I'd like your input on whether you consider /dev/vhost-net safe enough these days to do so.

As discussed I just sent the patch and you can comment there :-)
So upstream for discussion now ...

daniel.pawlik (daniel-pawlik) wrote :

Thanks Christian.
So I will wait for your patch to be merged.

Unless the security team really dislikes the idea of opening it up I'd want to at least SRU this change to Bionic - further back I'm not so sure (the further we go back, the less hardening/fixes the interface will have).

Adding bug tasks for that ...

Changed in libvirt (Ubuntu Bionic):
status: New → Triaged
Changed in libvirt (Ubuntu Cosmic):
status: New → Triaged
Changed in libvirt (Ubuntu):
status: Triaged → In Progress

Thanks Jamie, that is exactly why I asked about it.

Here is a call to the bug reporter and other affected users that might watch the bug.
Please chime in on the upstream discussion to clarify how "common it is to start a VM with no network devices and then hotplug one".

Based on that, the decision will be whether to add it as a static rule, or whether things stay as-is and the few users/usecases needing it are encouraged to add it as a local override.

If you can't/won't participate on the list, post it here and I can carry it over as needed.

David Negreira (dnegreira) wrote :

I am chiming in on behalf of our customer that is affected by this bug and patched his AppArmor profile manually for the time being to solve his issue.

I chatted with David and got this when asking why this would be a common case.
  <dnegreira> cpaelzer: yep, it is the default setup of the latest version of the nova-compute charm

That said, if this is the only reason then we might also just change the charm.
But it clearly identifies that we want to pull in the charmers for their opinion - adding a bug task for that.

James Page (james-page) wrote :

This is not the default, otherwise no one's cloud would actually work right now.

Instances are created with a network interface in paused state - at which point the interface plugging in neutron is executed; once that's completed the instance transitions to running and boots.

That's the default behaviour with neutron/ml2/ovs driver.

Are you using a different neutron driver?

Changed in charm-nova-compute:
status: New → Invalid

Yeah, thanks James that is just what I have assumed.
And the paused device is enough to have the rule added as expected.

@David and other reporters - can you provide a convincing case "due to what" or "why" you are driving it the other way with no device at all (initially)? As mentioned that is what is needed for the upstream discussion to add the rule by default.

daniel.pawlik (daniel-pawlik) wrote :

Thanks @Christian for continuing the discussion.

@James Page, I also use the neutron ml2 ovs driver. I understand that with the default nova policy only an administrator can spawn an instance without any interface, but if someone "tunes" the policy, they will have a problem.

Nothing else came up in a while, I have forwarded the comments to the mailing list and now wait for a consensus there.

The same behavior is reproduced in case when VM was created with network interface, interface detached, VM stopped (virsh destroy) and started again without network. This usecase does not need OpenStack admin privileges.

Paul Collins (pjdc) wrote :

I tripped over this problem recently with an instance that needed its neutron port recreated.

At some point the apparmor profile was regenerated while the instance lacked a network interface and so the permission required to reattach it was lost.

I ended up editing and reloading the profile manually, although re-running virt-aa-helper in a suitable manner would probably have been simpler and safer -- had I known it existed.

Jamie Strandboge (jdstrand) wrote :

I've stated my preference for upstream:

For Ubuntu, if this is causing a lot of problems, I'm open to a distro patch that enables the access by default, on the condition that /etc/libvirt/qemu.conf is adjusted to have a comment in the vicinity of:

#user = "root"
#group = "root"

with something along the lines of the following:

# By default libvirt runs VMs as non-root and uses AppArmor profiles
# to provide host protection and VM isolation. While AppArmor
# continues to provide this protection when the VMs are running as
# root, /dev/vhost-net access is allowed by default in the AppArmor
# security policy, so malicious VMs running as root would have
# direct access to this file. If changing this to run as root, you
# may want to remove this access from
# /etc/apparmor.d/abstractions/libvirt-qemu. For more information,
# see:
#user = "root"
#group = "root"

Thanks Jamie for providing an approach that is a compromise between upstream's needs and Ubuntu as a downstream - as well as at the same time being a tradeoff between comfort and security.

I'll implement this as a downstream change in 19.10:
- add the comment to the config (thanks for writing it up)
- change the code to allow it in any case

But for older releases I'd decide that we don't want to change this through an SRU.
There the solution for users who depend on it is to add
 /dev/vhost-net rw,
if existing (>= 18.10)
or otherwise to

Changed in libvirt (Ubuntu Disco):
status: In Progress → Won't Fix
Changed in libvirt (Ubuntu Cosmic):
status: Triaged → Won't Fix
Changed in libvirt (Ubuntu Bionic):
status: Triaged → Won't Fix
tags: added: libvirt-19.10
Dmitrii Shcherbakov (dmitriis) wrote :

I would also consider vhost_scsi and vhost_vsock besides vhost_net.

I think vhost_scsi might be covered by AppArmorSetSecurityHostLabel adding the rule as needed.
I'm not so sure about vhost_vsock.
It is certainly worth coming up with a few tests to ensure this holds for all early/late access cases when implementing it.

I checked vsock devices, those are fully mediated by libvirt and only an already open FD is passed when using those.
Without apparmor allowing a new open to qemu I have:

sudo lsof -p 9445 +fg | grep vhost
qemu-syst 9445 libvirt-qemu 19u CHR RW,LG 10,241 0t0 503 /dev/vhost-vsock

    <vsock model='virtio'>
      <cid auto='yes'/>

So vsock is good as-is

vhost_scsi would be like:

#1 making the module available:
  $ sudo modprobe vhost_scsi
#2 some prework to set things up [1]

My disk:

$ sudo targetcli
targetcli shell version 2.1.fb48
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> backstores/block create name=disk1 dev=/dev/disk/by-path/ccw-0.0.e000-fc-0x50050763060b16b6-lun-0x4024400a00000000
Created block storage object disk1 using /dev/disk/by-path/ccw-0.0.e000-fc-0x50050763060b16b6-lun-0x4024400a00000000.
/> vhost/ create
Created target naa.50014054d8284df8.
Created TPG 1.
/> vhost/naa.50014054d8284df8/tpg1/luns create /backstores/block/disk1
Created LUN 0.

Then in libvirt:
    <hostdev mode='subsystem' type='scsi_host' managed='no'>
      <source protocol='vhost' wwpn='naa.50014054d8284df8'/>

This again is mediated by libvirt and passed as FD
  -device vhost-scsi-ccw,wwpn=naa.50014054d8284df8,vhostfd=28,id=hostdev0,devno=fe.0.0007
  -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27

Works without a rule:
$ sudo lsof -p 14118 +fg | grep vhost
qemu-syst 14118 libvirt-qemu 18u CHR RW,LG 10,241 0t0 503 /dev/vhost-vsock
qemu-syst 14118 libvirt-qemu 27u CHR RW,ND,LG 10,238 0t0 502 /dev/vhost-net
qemu-syst 14118 libvirt-qemu 28u CHR RW,LG 10,47 0t0 586 /dev/vhost-scsi


Since hotplug and coldplug quite often differ, I was looking into that as well:

$ cat vhost-scsi.xml
    <hostdev mode='subsystem' type='scsi_host' managed='no'>
      <source protocol='vhost' wwpn='naa.50014054d8284df8'/>
$ virsh attach-device disco-luks vhost-scsi.xml
error: internal error: cannot update AppArmor profile 'libvirt-0804001f-c45f-4345-994f-9fec048e822e'

$ cat vhost-vsock.xml
    <vsock model='virtio'>
      <cid auto='yes'/>
error: internal error: unable to execute QEMU command 'getfd': No file descriptor supplied via SCM_RIGHTS

Here we go. This time it is not that the rule would be added later by virt-aa-helper.
Just the late passing of FDs breaks it.
The fix for vhost-vsock is the same as the one for vhost-net - scsi is different though.

With vhost fix:
Host: <nothing>
[ 915.674097] crw_info : CRW reports slct=0, oflw=0, chn=1, rsc=3, anc=0, erc=4, rsid=4
[ 915.674230] crw_info : CRW reports slct=0, oflw=0, chn=0, rsc=3, anc=0, erc=4, rsid=0
[ 915.713074] NET: Registered protocol family 40

This has the same "DAC would prevent it, so it is sort of OK with the comment" behavior as vhost-net.

This issue is different, and so will be the solution.
I have separated the work on the vhost-scsi hotplug case into bug 1829223

Launchpad Janitor (janitor) wrote :

This bug was fixed in the package libvirt - 5.0.0-1ubuntu4

libvirt (5.0.0-1ubuntu4) eoan; urgency=medium

  * d/p/ubuntu/lp-1825195-*.patch: fix issues with old guests that defined
    the never functional osxsave and ospke features (LP: #1825195).
  * d/p/series: reorder ubuntu Delta
  * d/p/ubuntu-aa/lp-1815910-allow-vhost-net.patch: avoid apparmor issues
    with vhost-net/vhost-vsock/vhost-scsi hotplug (LP: #1815910)
  * d/p/ubuntu-aa/lp-1829223-virt-aa-helper-allow-vhost-scsi.patch fix
    vhost-scsi hotplug in virt-aa-helper (LP: #1829223)

libvirt (5.0.0-1ubuntu3) eoan; urgency=medium

  * SECURITY UPDATE: Add support for md-clear functionality
    - debian/patches/ubuntu/md-clear.patch: Define md-clear CPUID bit in
    - CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091

 -- Christian Ehrhardt <email address hidden> Thu, 16 May 2019 10:42:09 +0200

Changed in libvirt (Ubuntu Eoan):
status: Triaged → Fix Released