The following will reproduce the issue in a disco VM with a disco LXD container.

Initial setup:

1. have an up-to-date disco VM
   $ cat /proc/version_signature
   Ubuntu 5.0.0-11.12-generic 5.0.6
2. sudo snap install lxd
3. sudo adduser `id -un` lxd
4. newgrp lxd
5. sudo lxd init # use defaults
6. . /etc/profile.d/apps-bin-path.sh

After this, note the SFS_MOUNTPOINT bug:

1. lxc launch ubuntu-daily:d d-testapparmor
2. lxc exec d-testapparmor /lib/apparmor/apparmor.systemd reload
3. fix /lib/apparmor/rc.apparmor.functions to define SFS_MOUNTPOINT="${SECURITYFS}/${MODULE}" at the top of is_container_with_internal_policy(), i.e.:
   lxc exec d-testapparmor vi /lib/apparmor/rc.apparmor.functions
4. lxc exec d-testapparmor -- sh -x /lib/apparmor/apparmor.systemd reload # notice apparmor_parser was called

At this point, these were called (as seen from the sh -x output above):

/sbin/apparmor_parser --write-cache --replace -- /etc/apparmor.d
/sbin/apparmor_parser --write-cache --replace -- /var/lib/snapd/apparmor/profiles

but no profiles were loaded:

$ lxc exec d-testapparmor aa-status

Note the odd parser error when trying to load an individual profile:

$ lxc exec d-testapparmor -- apparmor_parser -r /etc/apparmor.d/sbin.dhclient
AppArmor parser error for /etc/apparmor.d/sbin.dhclient in /etc/apparmor.d/tunables/home at line 25: Could not process include directory '/etc/apparmor.d/tunables/home.d' in 'tunables/home.d'

Stopping and starting the container doesn't help:

$ lxc stop d-testapparmor
$ lxc start d-testapparmor
$ lxc exec d-testapparmor aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
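The step-3 fix can be sketched as follows. This is a minimal, illustrative version of is_container_with_internal_policy() — the real function in /lib/apparmor/rc.apparmor.functions does more than a directory check — showing only where the SFS_MOUNTPOINT definition is added and why it matters (without it, the variable is empty when the function runs):

```shell
#!/bin/sh
# Hedged sketch of the step-3 fix: define SFS_MOUNTPOINT at the top of
# is_container_with_internal_policy() so it is set before first use.
# SECURITYFS and MODULE mirror the variable names in rc.apparmor.functions;
# the function body below is illustrative, not the real implementation.
SECURITYFS="${SECURITYFS:-/sys/kernel/security}"
MODULE="apparmor"

is_container_with_internal_policy() {
    SFS_MOUNTPOINT="${SECURITYFS}/${MODULE}"    # <-- the added definition
    # Illustrative check only: without the line above, SFS_MOUNTPOINT is
    # empty here and the function inspects the wrong path.
    [ -d "${SFS_MOUNTPOINT}" ]
}

# Demonstrate against a scratch directory layout so the sketch is
# self-contained (real use keeps the default /sys/kernel/security):
SECURITYFS=$(mktemp -d)
mkdir -p "${SECURITYFS}/${MODULE}"
if is_container_with_internal_policy; then
    echo "internal policy: yes"
else
    echo "internal policy: no"
fi
rm -rf "${SECURITYFS}"
```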
Note: under 5.0.0-8.9 and with the SFS_MOUNTPOINT fix, the tunables error goes away:

$ lxc exec d-testapparmor -- apparmor_parser -r /etc/apparmor.d/sbin.dhclient
$

and the profiles load on container start:

$ lxc exec d-testapparmor aa-status
apparmor module is loaded.
27 profiles are loaded.
27 profiles are in enforce mode.
   /sbin/dhclient
   /snap/core/6673/usr/lib/snapd/snap-confine
   /snap/core/6673/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/bin/man
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/lib/snapd/snap-confine
   /usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/sbin/tcpdump
   man_filter
   man_groff
   nvidia_modprobe
   nvidia_modprobe//kmod
   snap-update-ns.core
   snap-update-ns.lxd
   snap.core.hook.configure
   snap.lxd.activate
   snap.lxd.benchmark
   snap.lxd.buginfo
   snap.lxd.check-kernel
   snap.lxd.daemon
   snap.lxd.hook.configure
   snap.lxd.hook.install
   snap.lxd.lxc
   snap.lxd.lxd
   snap.lxd.migrate
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

However, 5.0.0-11.12 has fixes for lxd and apparmor, and 11.12 also starts using shiftfs.
Very interestingly, if I create a container under 5.0.0-8.9, apply the SFS_MOUNTPOINT fix, and start it under 5.0.0-11.12, then policy loads and everything seems fine; there are no shiftfs mounts for that container:

$ lxc exec d-testapparmor -- grep shiftfs /proc/self/mountinfo
$

*but* if I create the container under 11.12, I see the problems, and there are shiftfs mounts:

$ lxc exec shiftfs-testapparmor -- grep shiftfs /proc/self/mountinfo
1042 443 0:78 / / rw,relatime - shiftfs /var/snap/lxd/common/lxd/storage-pools/default/containers/shiftfs-testapparmor/rootfs rw,passthrough=3
1067 1043 0:57 /shiftfs-testapparmor /dev/.lxd-mounts rw,relatime master:216 - tmpfs tmpfs rw,size=100k,mode=711
1514 1042 0:78 /snap /snap rw,relatime shared:626 - shiftfs /var/snap/lxd/common/lxd/storage-pools/default/containers/shiftfs-testapparmor/rootfs rw,passthrough=3
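The grep above also matches the "master:216" tmpfs line only by accident of string matching; per proc(5), the filesystem type is the field after the " - " separator in each mountinfo line. A small self-contained sketch that parses the shiftfs entries properly, fed the three lines captured from the shiftfs-testapparmor container above:

```shell
#!/bin/sh
# Print "mountpoint <- source" for each shiftfs entry in
# mountinfo-formatted input. Per proc(5), fields 1-6 are fixed,
# optional fields follow, then " - ", then fstype, source, super options.
list_shiftfs_mounts() {
    awk '{
        sep = 0
        for (i = 7; i <= NF; i++)          # optional fields end at "-"
            if ($i == "-") { sep = i; break }
        if (sep && $(sep + 1) == "shiftfs")
            print $5 " <- " $(sep + 2)     # $5 is the mount point
    }' "$@"
}

# Sample input: the mountinfo lines captured from the container above.
list_shiftfs_mounts <<'EOF'
1042 443 0:78 / / rw,relatime - shiftfs /var/snap/lxd/common/lxd/storage-pools/default/containers/shiftfs-testapparmor/rootfs rw,passthrough=3
1067 1043 0:57 /shiftfs-testapparmor /dev/.lxd-mounts rw,relatime master:216 - tmpfs tmpfs rw,size=100k,mode=711
1514 1042 0:78 /snap /snap rw,relatime shared:626 - shiftfs /var/snap/lxd/common/lxd/storage-pools/default/containers/shiftfs-testapparmor/rootfs rw,passthrough=3
EOF
```

This reports the / and /snap shiftfs mounts and correctly skips the tmpfs line even though it carries an optional "master:216" field before the separator.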