esm-cache.service denied access to /etc/os-release by apparmor
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
ubuntu-advantage-tools (Ubuntu) | Status tracked in Oracular | | |
Xenial | Fix Committed | Undecided | Unassigned |
Bionic | Fix Committed | Undecided | Unassigned |
Focal | Fix Committed | Undecided | Unassigned |
Jammy | Fix Committed | Undecided | Unassigned |
Mantic | Fix Committed | Undecided | Unassigned |
Noble | Fix Committed | Undecided | Unassigned |
Oracular | Fix Released | High | Andreas Hasenack |
Bug Description
[ Impact ]
On systems where /etc/os-release is an actual file instead of a symlink to /usr/lib/os-release, the apparmor profile denies read access to it. This results in the esm-cache.service failing to run:
May 13 19:17:29 j-uat-2065573 python3[3490]: ["2024-
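The distinction that triggers the bug is whether /etc/os-release is a plain file or a symlink. The sketch below demonstrates that check with temporary stand-in files; the paths and contents are illustrative only and nothing here touches the real /etc/os-release:

```shell
# classify_path: print "symlink" or "regular file" for a given path
classify_path() {
  if [ -L "$1" ]; then echo "symlink"; else echo "regular file"; fi
}

# Demonstrate with temporary stand-ins (illustrative, not the real files)
tmp=$(mktemp -d)
echo 'ID=ubuntu' > "$tmp/os-release-file"        # the bug scenario: a plain file
ln -s os-release-file "$tmp/os-release-symlink"  # the common layout: a symlink
classify_path "$tmp/os-release-file"             # prints: regular file
classify_path "$tmp/os-release-symlink"          # prints: symlink
rm -rf "$tmp"
```

When /etc/os-release is a symlink, reads of it resolve to /usr/lib/os-release, which the profile already permitted via included abstractions; a plain file at /etc/os-release is matched as its own path and was not covered.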
[ Test Plan ]
Keep sudo dmesg -wT | grep ubuntu_pro running in a terminal (in the same VM, if testing in a VM, or in the host, if testing with a LXD container), and then run this on the system being tested (LXD or VM):
sudo rm /etc/os-release
sudo cp /usr/lib/os-release /etc
sudo rm -rf /var/lib/
sudo systemctl start esm-cache.service
There should be no apparmor DENIED messages for access to /etc/os-release in the dmesg output. Additionally, /var/log/
Additionally, for a more surgical test, also run these:
sudo rm /etc/os-release
sudo cp /usr/lib/os-release /etc
sudo aa-exec -p ubuntu_
On a system with the fixed apparmor profile, you should see the contents of /etc/os-release. With the bug, the last command above will return a permission denied error and dmesg will show a corresponding apparmor DENIED error.
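As a rough illustration of what to look for when a denial does occur, the sketch below pulls the profile and denied path out of an AVC line. The line itself is made up for the example (the profile name and field values are hypothetical, not copied from the real log):

```shell
# Hypothetical AVC line for demonstration; field values are invented
line='audit: type=1400 apparmor="DENIED" operation="open" profile="ubuntu_pro_profile" name="/etc/os-release" pid=1234 comm="python3"'

# Extract the profile= and name= fields with sed capture groups
profile=$(printf '%s\n' "$line" | sed -n 's/.*profile="\([^"]*\)".*/\1/p')
denied_path=$(printf '%s\n' "$line" | sed -n 's/.*name="\([^"]*\)".*/\1/p')
echo "profile=$profile denied=$denied_path"
# prints: profile=ubuntu_pro_profile denied=/etc/os-release
```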
[ Where problems could occur ]
The fix is to add a rule allowing access to /etc/os-release, and to /usr/lib/os-release as well (even though the latter was already covered via other apparmor abstractions that are included).
We don't think this new allow rule introduces an additional security risk; in fact, it should probably be covered by some base abstraction in the future.
The main risk introduced by this fix is a syntax error in the profile, but that is covered by the package build, which runs a syntax check.
The other risk is that this rule could be correct only for certain Ubuntu releases and not for older ones such as Xenial, but this is a very simple file access rule, something even very old apparmor profiles already understand.
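The rule addition described above might look roughly like the following profile excerpt. This is a sketch assuming standard apparmor file-rule syntax; the surrounding profile context and the exact placement inside the ubuntu-pro profiles are omitted:

```
# Allow reading os-release whether it is a plain file or a symlink target.
# (Sketch only; actual rule placement in the shipped profile may differ.)
/etc/os-release r,
/usr/lib/os-release r,
```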
[ Other Info ]
This was found by the CI system of a contributor who happened to be including proposed packages in their testing, and whose system, for some reason, does not have /etc/os-release as a symlink. We are unsure why /etc/os-release is not a symlink there, but it is nevertheless a valid scenario and should be fixed in the apparmor profile.
[ Original Description ]
We just caught a regression in our CI: https:/
An unexpected apparmor denial is logged in the journal:
May 13 08:49:01 ubuntu systemd[1]: Starting Update APT News...
May 13 08:49:01 ubuntu systemd[1]: Starting Update the local ESM caches...
May 13 08:49:02 ubuntu PackageKit[2370]: refresh-cache transaction /17_aebebede from uid 0 finished with success after 384ms
May 13 08:49:02 ubuntu audit[2667]: AVC apparmor="DENIED" operation="open" profile=
May 13 08:49:02 ubuntu kernel: kauditd_printk_skb: 59 callbacks suppressed
May 13 08:49:02 ubuntu kernel: audit: type=1400 audit(171559014
May 13 08:49:02 ubuntu python3[2667]: ["2024-
May 13 08:49:02 ubuntu systemd[1]: esm-cache.service: Deactivated successfully.
May 13 08:49:02 ubuntu systemd[1]: Finished Update the local ESM caches.
May 13 08:49:02 ubuntu systemd[1]: apt-news.service: Deactivated successfully.
May 13 08:49:02 ubuntu systemd[1]: Finished Update APT News.
The relevant change since the last (working) state is that these packages got updated:
ubuntu-
ubuntu-pro-client (31.2.3~22.04 -> 32~22.04)
ubuntu-
Hi, thanks for catching this, and for testing the proposed version of ubuntu-advantage-tools (v32).
We are still a bit baffled by how this escaped our CI and, to be honest, haven't yet been able to reproduce the apparmor DENIED message. Looking at the apparmor profiles involved, we don't see a rule allowing /etc/os-release to be read, yet the denial doesn't happen in a jammy test installation, and so far we can't explain why.
Looking at https://cockpit-logs.us-east-1.linodeobjects.com/image-refresh-ubuntu-2204-6e3c7232-20240512-223711/log.html, it looks like you have jammy-proposed enabled at large and are grabbing everything from there, if I understood that correctly. I'll try to reproduce it that way.