As I discussed with Nish (thanks for the update here), we might need to extend the apparmor profile, but we should understand the case better before doing so. I use KVM guests on systems with DPDK hugepages as well without issues, and I use huge pages for the guests, so something has to be special in how the charms configure them here. If you enable hugepages for DPDK, it makes sure there are hugepage mounts for each page size. So far we haven't had an issue with that, but it seems the problem occurs if you run qemu with huge pages on the same system.

Usually I set this to 1 in /etc/default/qemu-kvm:

  # Set this to 1 if you want hugepages to be available to kvm under
  # /run/hugepages/kvm
  KVM_HUGEPAGES=0

And my guests have an explicit hugepage configuration with the page size set. This setup has been working fine so far, but the explicit page size in particular is a consequence of libvirt/qemu supporting multiple page sizes and might be missing in the charm/openstack so far.

If the option KVM_HUGEPAGES is set, qemu-kvm ensures there is a mountpoint (of the default huge page size) at /run/hugepages/kvm:

  mkdir -p /run/hugepages/kvm
  mount -t hugetlbfs hugetlbfs-kvm -o mode=775,gid=kvm /run/hugepages/kvm

BTW, 'owner "/run/hugepages/kvm/libvirt/qemu/**" rw,' is also in the apparmor profile. But libvirt, without explicit configuration, will pick up hugepage mounts from /proc/mounts. In some sense that means we would need to include any target directory that hugepages will ever be mounted to. Of course we might add the wildcard for the DPDK paths, but that still leaves the issue open for any other path added by e.g. a database admin mounting hugepages for his DB.

In fact there is a config option in /etc/libvirt/qemu.conf that can set this as needed. Quoting from that config: If provided by the host and a hugetlbfs mount point is configured, a guest may request huge page backing. When this mount point is unspecified here, determination of a host mount point in /proc/mounts will be attempted.
Specifying an explicit mount overrides detection of the same in /proc/mounts. Setting the mount point to "" will disable guest hugepage backing. If desired, multiple mount points can be specified at once, separated by comma and enclosed in square brackets, for example: [...]

It might be worth setting something in there as a workaround to disable the suboptimal detection:

  $ echo 'hugetlbfs_mount = ["/run/hugepages/kvm"]' >> /etc/libvirt/qemu.conf

Essentially qemu only "does what it is told", as in:

  -object memory-backend-file,id=mem1,size=1G,mem-path=/mnt/hugepages-1G \
  -device pc-dimm,id=dimm1,memdev=mem1 \

So it might be interesting to see what command line libvirt creates.

Questions:
- Was there any chance to test the workaround Seth suggested?
- Thiago, could you attach a copy of the target system's /etc/default/qemu-kvm and the created guest XML that openstack is pushing?
- Also, on your host, could you run "mount | grep huge" and report that as well? Maybe the order is important.
- Can you report the qemu command line that is constructed by libvirt, from /var/log/libvirt/qemu/<guest>.log?
- Could you try the workaround of disabling libvirt's detection by setting hugetlbfs_mount in /etc/libvirt/qemu.conf?

Based on that feedback we can decide if we "just" want to widen the apparmor profile, or modify the charms, or if we need more than that.
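For reference, the explicit per-page-size guest configuration mentioned above is normally expressed in the libvirt domain XML via a memoryBacking stanza; the page size below (1G pages) is just an example and has to match one of the host's configured hugepage pools:

```xml
<memoryBacking>
  <hugepages>
    <!-- example only: back guest RAM with 1G huge pages -->
    <page size="1048576" unit="KiB"/>
  </hugepages>
</memoryBacking>
```

Whether the charm-generated guest XML contains such an explicit <page size=.../> element (or only a bare <hugepages/>) is exactly the kind of detail the requested XML attachment should show.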