Both /proc/PID/uid_map and /proc/PID/gid_map would be parsed.
Each is made up of mapping entries of the form:
- container id
- host id
- count
By parsing each entry, we can effectively build a list of host id ranges that are mapped into the container. The fact that an id is mapped into the container means that uid 0 in that container has control over it, so it's safe to forward a crash for it.
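As a minimal sketch of that parsing step (the function name `parse_id_map` is hypothetical, not apport's actual API), each map line is split into its three fields and turned into a host-side id range:

```python
def parse_id_map(text):
    """Parse the contents of /proc/PID/uid_map or /proc/PID/gid_map.

    Each line is "<container id> <host id> <count>"; the host ids
    mapped into the namespace are [host id, host id + count).
    Returns a list of range objects over host ids.
    """
    ranges = []
    for line in text.splitlines():
        if not line.strip():
            continue
        _container_id, host_id, count = (int(field) for field in line.split())
        ranges.append(range(host_id, host_id + count))
    return ranges
```

In practice this would be fed the file contents, e.g. `parse_id_map(open(f"/proc/{pid}/uid_map").read())`.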
We'd then check that the task's executable is owned by a uid and a gid within those ranges, and only forward the crash in that case.
So now, if you're a random user on the system, you can't simply unshare a user namespace, map your own uid/gid to 0 inside it, set up a unix socket, and execute a setuid binary.
When that setuid binary crashes, apport will notice that the crashed task's uid/gid maps only include a single id (the user's own), that the binary which crashed isn't owned by that id, and will therefore skip forwarding.
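The ownership check above can be sketched as a small predicate (names like `task_ids_mapped` are illustrative, not apport's real functions): the crash is forwarded only when the executable's owner uid and gid both fall inside some mapped host id range.

```python
def task_ids_mapped(exe_uid, exe_gid, uid_ranges, gid_ranges):
    """True only if the crashed task's executable is owned by a uid AND a
    gid that are mapped into the task's user namespace; otherwise the
    crash is not forwarded."""
    return (any(exe_uid in r for r in uid_ranges)
            and any(exe_gid in r for r in gid_ranges))
```

The caller would obtain the owner from the crashed task, e.g. `st = os.stat(f"/proc/{pid}/exe")` and then pass `st.st_uid` and `st.st_gid`. In the setuid attack scenario, the map covers only the attacker's own id while the binary is owned by root, so the predicate is false and forwarding is skipped.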