apparmor cache not updated when apparmor.d rules change (breaks 15.04/stable -> 15.04/edge updates)
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Snappy | | Critical | Michael Vogt | |
| 15.04 | | Critical | Michael Vogt | |
| apparmor (Ubuntu) | | Undecided | Unassigned | |
Bug Description
The apparmor cache gets confused easily on upgrade.
Here is what happens:
- boot stable, /etc/apparmor.
- upgrade to edge, /etc/apparmor.
- on the next reboot the apparmor_parser compares the mtime of the cache/usr.
-> the cache is *not* re-generated
Possible solution:
- clear cache on upgrade
- make apparmor_parser store mtime of the source file in the header
- make apparmor_parser set the cache file's mtime to the mtime of the source file used to generate it, and re-generate whenever the two get out of sync
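For what the third option amounts to in practice, here is a minimal sketch in Go (the language of the snappy code referenced in the related branches below); the function name and profile paths are illustrative, not the actual snappy helpers. Requiring the mtimes to match exactly, rather than only checking which file is newer, also covers the restored-backup case described in comment #2 below.

```go
package main

import (
	"fmt"
	"os"
)

// cacheIsStale reports whether the compiled cache for a profile should be
// regenerated. Instead of the parser's "cache newer than profile" test, it
// requires the two mtimes to match exactly, so a profile restored from a
// backup (with an older mtime) also invalidates the cache.
func cacheIsStale(profilePath, cachePath string) (bool, error) {
	profInfo, err := os.Stat(profilePath)
	if err != nil {
		return false, err
	}
	cacheInfo, err := os.Stat(cachePath)
	if os.IsNotExist(err) {
		return true, nil // no cache yet, must generate
	}
	if err != nil {
		return false, err
	}
	return !profInfo.ModTime().Equal(cacheInfo.ModTime()), nil
}

func main() {
	stale, err := cacheIsStale(
		"/etc/apparmor.d/usr.bin.foo",       // hypothetical profile
		"/etc/apparmor.d/cache/usr.bin.foo", // its compiled cache
	)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if stale {
		fmt.Println("cache out of sync, re-run apparmor_parser with --write-cache")
	}
}
```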
Original description:
-------
Rick Spencer ran into the situation that he ended up with a snappy image that gave the following error:
"""
apparmor="DENIED" operation="mkdir" profile=
"""
Running:
$ sudo apparmor_parser --skip-cache -r /etc/apparmor.
fixes it.
This strongly indicates that the cache has the old content and did not get re-generated on upgrade or image build.
I also managed to reproduce this via:
15.04/stable-
The image is here: https:/
Related branches
- Ricardo Salveti (community): Needs Information on 2015-06-04
  Diff: 157 lines (+78/-6), 5 files modified:
  - helpers/helpers.go (+17/-0)
  - helpers/helpers_test.go (+11/-1)
  - snappy/dirs.go (+4/-0)
  - snappy/systemimage.go (+16/-5)
  - snappy/systemimage_test.go (+30/-0)
- Sergio Schvezov (community): Approve on 2015-06-05
- Ubuntu branches: Pending requested 2015-06-05
  Diff: 94 lines (+59/-0), 5 files modified:
  - debian/changelog (+19/-0)
  - debian/install (+1/-0)
  - debian/links (+1/-0)
  - lib/systemd/system/snappy-workaround-apparmor.service (+11/-0)
  - usr/bin/snappy-apparmor-lp1460152 (+27/-0)
| description: | updated |
| Ricardo Salveti (rsalveti) wrote : | #1 |
| description: | updated |
| description: | updated |
| description: | updated |
| summary: | (sometimes?) becomes confused about apparmor rules for ubuntu-core-launcher → apparmor cache not updated when apparmor.d rules change |
| Changed in snappy: | |
| importance: | Undecided → Critical |
| summary: | apparmor cache not updated when apparmor.d rules change → apparmor cache not updated when apparmor.d rules change (breaks 15.04/stable -> 15.04/edge updates) |
| Changed in snappy: | |
| status: | New → In Progress |
| Michael Vogt (mvo) wrote : | #2 |
This should be fixed with:
http://
I do however wonder if apparmor_parser should set the mtime of the cache file to the mtime of the original file. The current behavior will also break e.g. restore of backups to /etc/apparmor.d/
Assume that:
- /etc/apparmor.
- admin edits usr.bin.foo and breaks it
- admin re-boots
- admin runs /usr/bin/foo and it fails
- admin restores /etc/apparmor.
-> now the cache never gets updated and /usr/bin/foo is broken because the cache is newer than the mtime of the etc/apparmor.
One fix would be to use the same mtime in the cache as in the original and re-generate the cache if the mtimes differ. Or just encode the mtime of the original in the cache data structure.
| Michael Vogt (mvo) wrote : | #3 |
Fix for livecd-rootfs needs to get backported to vivid of course too.
| Michael Vogt (mvo) wrote : | #4 |
One thing we need to make sure of here is that systems that are already broken will "heal" themselves.
| Michael Vogt (mvo) wrote : | #5 |
It turns out that my approach might be too simplistic, because:
- the apparmor profile cache is kernel-version specific, so I need to verify that there is no problem here (and/or whether the cache is auto-regenerated)
| Michael Vogt (mvo) wrote : | #6 |
I looked into this some more, as I was confused about why this works on the distro. It turns out that dh_apparmor re-generates the cache at install time.
I would really prefer it if apparmor could handle this differently; I attach an (ugly) proof-of-concept patch with what I have in mind. My idea is to sync the mtime of the cache and the profile to ensure the cache is always re-generated when they are out of sync. Ideally this would be part of the apparmor cache header, I think.
| description: | updated |
| Michael Vogt (mvo) wrote : | #7 |
This should be fixed with image r76; the cache files are generated on the server now, just like touch does it.
| tags: | added: patch |
| Michael Vogt (mvo) wrote : | #8 |
Ricardo pointed out that we need to consider the features file (just like touch).
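If the cache can survive a kernel change, the kernel feature set it was built against has to be checked as well. A minimal sketch of that extra check, assuming the feature set is recorded as a plain file next to the cache (the exact paths and file names here are assumptions, not the actual touch/snappy layout):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// kernelFeaturesMatch compares the features snapshot stored next to the
// pre-generated cache with a snapshot taken for the running kernel. If they
// differ, the cached binaries were built for a different kernel and the
// cache should be discarded. Both paths are illustrative.
func kernelFeaturesMatch(cachedFeatures, currentFeatures string) (bool, error) {
	cached, err := os.ReadFile(cachedFeatures)
	if os.IsNotExist(err) {
		return false, nil // no snapshot recorded: treat the cache as invalid
	}
	if err != nil {
		return false, err
	}
	current, err := os.ReadFile(currentFeatures)
	if err != nil {
		return false, err
	}
	return bytes.Equal(cached, current), nil
}

func main() {
	ok, err := kernelFeaturesMatch(
		"/etc/apparmor.d/cache/.features", // assumed location of the cached snapshot
		"/run/apparmor-features.current",  // hypothetical snapshot for the running kernel
	)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("features match:", ok)
}
```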
| Changed in snappy: | |
| assignee: | nobody → Michael Vogt (mvo) |
| Michael Vogt (mvo) wrote : | #9 |
I added a different approach that stores hashes next to the cached files so that we can check whether hash(profile) == hash(cache) and re-generate if not.
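The actual branch will differ, but roughly the hashes-next-to-the-cache idea looks like this Go sketch (the "<cache>.profile-hash" sidecar name and the paths are hypothetical, for illustration only):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
)

// fileHash returns the hex-encoded SHA-256 of a file's contents.
func fileHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]), nil
}

// cacheMatchesProfile checks the hash recorded next to the cached binary
// (a hypothetical "<cache>.profile-hash" sidecar) against the hash of the
// current profile source. A mismatch, or a missing sidecar, means the cache
// must be regenerated and the sidecar rewritten.
func cacheMatchesProfile(profilePath, cachePath string) (bool, error) {
	want, err := fileHash(profilePath)
	if err != nil {
		return false, err
	}
	recorded, err := os.ReadFile(cachePath + ".profile-hash")
	if os.IsNotExist(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return string(recorded) == want, nil
}

func main() {
	ok, err := cacheMatchesProfile(
		"/etc/apparmor.d/usr.bin.foo",
		"/etc/apparmor.d/cache/usr.bin.foo",
	)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("cache valid:", ok)
}
```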
| Jamie Strandboge (jdstrand) wrote : | #10 |
FYI, the hash approach is slow for the normal case since we always have to compute a checksum. Furthermore, it doesn't take into account #include'd files that might also change (eg, apparmor is updated and has a different base abstraction). For the workaround I guess it is ok, since the slowdown will only be for a couple of profiles, but I would rather have seen us unconditionally invalidate the cache when switching from a to b or vice versa.
| John Johansen (jjohansen) wrote : | #11 |
Yes, the apparmor_parser should set the mtime of the cache file to be the most recent mtime of the set of policy files that resulted in the cache file's creation. This is something we have been meaning to do for a long time but have just never gotten around to, because there is always something more important.
I will come up with a patch today
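Sketched in Go rather than the parser's C, and with the set of input files passed in explicitly, the behaviour John describes would look roughly like this (function name and paths are illustrative, not the parser's real code):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// stampCache sets the cache file's mtime to the newest mtime found among
// the policy files (the profile plus everything it #includes) that went
// into the compile. The cache is then considered valid only while no input
// is newer than that timestamp.
func stampCache(cachePath string, policyFiles []string) error {
	var newest time.Time
	for _, p := range policyFiles {
		info, err := os.Stat(p)
		if err != nil {
			return err
		}
		if info.ModTime().After(newest) {
			newest = info.ModTime()
		}
	}
	// Keep atime equal to the new mtime for simplicity.
	return os.Chtimes(cachePath, newest, newest)
}

func main() {
	err := stampCache(
		"/etc/apparmor.d/cache/usr.bin.foo", // compiled cache (illustrative path)
		[]string{
			"/etc/apparmor.d/usr.bin.foo",
			"/etc/apparmor.d/tunables/global",   // example of an #included file
			"/etc/apparmor.d/abstractions/base", // ditto
		},
	)
	if err != nil {
		fmt.Println("error:", err)
	}
}
```

Because the timestamp is taken over the whole input set, this also addresses the #include concern raised in comment #10.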
| Michael Vogt (mvo) wrote : | #12 |
@all I just verified that a 15.04/stable -> 15.04/edge upgrade works and that the caches are regenerated. So the workaround works.
@John I started with the mtime approach in my proof-of-concept patch, so if you guys are too busy I can try to expand it to cover the includes as well (it does not right now). Great to hear that we are close to a nice and clean solution :)
@Jamie Thanks for your feedback! I hadn't considered the #includes; that's a gap in the patch indeed.
| John Johansen (jjohansen) wrote : | #13 |
Michael,
I have a patch (well, two actually), and they just need further review and testing. I also have a partial hashing patch that, if needed, could be finished in a few hours and adds native hashing (if we go this route, we could make the hash selectable, so something fast like lookup3 could be chosen for a given platform).
| John Johansen (jjohansen) wrote : | #14 |
second patch
| Michael Vogt (mvo) wrote : | #15 |
@John Yay! The patches look great, thanks a lot! I leave the decision on hashing vs mtime to you/the security team. For me the mtime approach is good enough (unless I'm missing some failure case that is relatively easy to trigger, it seems to cover all but the most pathological cases) and it will solve this bug in a nice and clean way.
| Jamie Strandboge (jdstrand) wrote : | #17 |
This is fine for wily. We'll want to backport this to other releases, but we'll need to be careful wrt 15.04 because touch is about to release their 15.04-based OTA and if we push this to vivid-updates, then it will trigger a policy recompile on touch. As such, I think for now we should either:
1. update the snappy image build ppa with this fix, or
2. push this as SRU to 15.04 and update the stable-
Since only snappy is known to need this right now, I think the former is the way to go unless we get reports that the distro needs this SRU'd to 15.04, at which point we should do '2'.
| Michael Vogt (mvo) wrote : | #18 |
I'm in favour of (1) too, but let's wait until the snappy point release is done. I'll add a Trello card so that it's not forgotten.
| Ricardo Salveti (rsalveti) wrote : | #19 |
Let's land it on wily, test, and then push to our PPA (so we can also test it there and revert the workaround); we can include this in our next stable release :-)
| Michael Vogt (mvo) wrote : | #20 |
I looked into backporting this, but it seems not entirely straightforward: the code layout changed and the changed files are not available in 2.9, it seems. So this needs some work beyond just applying the patch.
| John Johansen (jjohansen) wrote : | #21 |
sorry, yes. I have been poking at what is the best/minimum backport of this
| John Johansen (jjohansen) wrote : | #22 |
Tentative backport of patch for 2.9 (note it only needs a single patch)
| Changed in snappy: | |
| status: | In Progress → Fix Committed |
| Changed in snappy: | |
| status: | Fix Committed → Fix Released |
| Changed in apparmor (Ubuntu): | |
| status: | New → Fix Released |


<jdstrand> rsalveti: I'm only putting this out there as an option that you are free to ignore-- there are only 3 system profiles on core. pregenerating them doesn't buy us much on first boot-- about 2.2 seconds on bbb
<jdstrand> rsalveti: /etc/apparmor.d/cache is in /etc/system-image/writable-paths
<jdstrand> rsalveti: maybe on upgrade hook could simply rm -f /etc/apparmor.d/cache/*
<jdstrand> rsalveti: could the a/b partitioning be getting in the way?
<jdstrand> rsalveti: ie, are the writable bind mounted areas a/b'd as well?
<rsalveti> that's an interesting question
* ogra_ thought not
<ogra_> we only have one writable and a and b
<jdstrand> rsalveti: eg, if a has old cache and old profile and we reboot into b with new profile, do we get a's old cache file?
<rsalveti> yeah, are we sharing the same writable path for the cache?
<rsalveti> if so, then, hmm
<ogra_> most likely
<rsalveti> not good
<ogra_> since we only have one writable partition
<rsalveti> ogra_: can you confirm?
<ogra_> yeah, i think i can
<ogra_> three partitions ... two readonly, one writable ...
<ogra_> writable gets mounted in initrd by label, no matter what readonly part is active
<ogra_> and /etc/apparmor.
<ogra_> i guess we would want an a/b scheme there
<ogra_> in a subdir or some such
<ogra_> or via a bind mount that hides the real path
<jdstrand> ok, so I'm much more convinced that for now, we rm -f /etc/apparmor.
<jdstrand> cause the alternative would be too risky for sru
<jdstrand> we need to implement the alternative for touch anyway
<jdstrand> s/touch/personal/
<rsalveti> ogra_: jdstrand: yeah, let's just do this
<rsalveti> sharing same writable path can be indeed dangerous
<rsalveti> it's usually desirable
<rsalveti> but we need to be careful