AppArmor denies crun sending signals to containers (stop, kill)

Bug #2040483 reported by Martin Pitt
This bug affects 13 people
Affects                                   Series    Status        Importance  Assigned to
golang-github-containers-common (Ubuntu)  Oracular  Fix Released  Undecided   Unassigned
libpod (Ubuntu)                           Mantic    Confirmed     Undecided   Unassigned
                                          Noble     Confirmed     Undecided   Unassigned
                                          Oracular  Fix Released  Undecided   Unassigned

(Status for both packages is tracked in Oracular; no milestones set.)

Bug Description

[ Impact ]

 * On mantic and noble, when run as root, podman cannot stop any container running in the background, because crun is now confined by a new profile introduced in AppArmor v4.0.0 and the container's profile lacks a corresponding signal receive rule.

 * Without the fix, users would have to resort to figuring out the container's PID 1 and killing it as root or from another privileged, unconfined process (see the sketch after this list). This is a regression in basic podman functionality.

 * The fix adds signal receive rules for the OCI runtimes confined by AppArmor v4.0.0 (runc and crun) to the container profile used by podman.
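
For illustration, the manual workaround mentioned in the second bullet might look like this (a sketch only; "foo" is a placeholder container name, and the inspect format field is standard podman):

  # Find the container's PID 1 on the host and signal it from an
  # unconfined root shell
  pid=$(podman inspect --format '{{.State.Pid}}' foo)
  kill -TERM "$pid"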

[ Test Plan ]

All commands must be invoked as root.

Run the tests below with both the crun and runc OCI runtimes. For crun, nothing has to be changed (it's installed and used by default). For runc, first install the runc package, and then insert "--runtime /usr/sbin/runc" after "podman run", as in the example below.
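
For example, the first test's start command under runc becomes:

  # Install runc, then run the container under it instead of crun
  apt install -y runc
  podman run --runtime /usr/sbin/runc -d --name foo docker.io/library/nginx:latest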

Start a container in the background and then stop it:

  # Run container in background (-d)
  podman run -d --name foo docker.io/library/nginx:latest
  # Stop the container
  podman stop foo

On success, the last command should print the container name, and the container running in the background should be stopped (verify with "podman ps").
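
Optionally, the check can be scripted (--filter and --format are standard podman-ps options):

  # Prints e.g. "foo Exited (0) ..." once the container has been stopped
  podman ps -a --filter name=foo --format '{{.Names}} {{.Status}}'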

Additional tests:

Verify that a container running on a foreground TTY can be stopped.

  # Terminal 1:
  # Run container on this TTY
  podman run -it --name bar --rm docker.io/library/ubuntu:22.04

  # Terminal 2:
  # Stop the container
  podman stop bar

On success, the last command should print the container name, the process running in terminal 1 should stop, and the container should be removed (verify with "podman ps -a").

Verify that a container running with a dumb init can be killed.

  # Run container in background (-d) with dumb init
  podman run -d --name bar --rm --init ubuntu:22.04 sleep infinity
  # Stop the container
  podman stop bar

On success, the last command should print the container name, and the container running in the background should be stopped and removed (verify with "podman ps -a").

Verify that container processes can signal each other.

  # Run container in foreground with processes sending signals between themselves
  podman run ubuntu:22.04 sh -c 'sleep inf & sleep 1 ; kill $!'

On success, the last command should exit after about 1 second with exit status 0.

[ Where problems could occur ]

 * The fix requires a rebuild of podman that will pull in any other changes in the archive since the last build, which could potentially break some functionality.

[ Original report ]

Mantic's system podman containers are completely broken due to bug 2040082. However, after fixing that (rebuilding with the patch, or a *shht don't try this at home* hack [1]), the AppArmor policy still causes bugs:

  podman run -it --rm docker.io/busybox

Then

  podman stop -l

fails with

   2023-10-25T11:06:33.873998Z: send signal to pidfd: Permission denied

and journal shows

  audit: type=1400 audit(1698231993.870:92): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.50.1" pid=4713 comm="3" requested_mask="receive" denied_mask="receive" signal=term peer="/usr/bin/crun"

This leaves the container in a broken state:

  # podman ps -a
  CONTAINER ID  IMAGE                             COMMAND  CREATED         STATUS                      PORTS  NAMES
  61749260f9c4  docker.io/library/busybox:latest  sh       40 seconds ago  Exited (-1) 29 seconds ago         confident_bouman

  # podman rm --all
  2023-10-25T11:07:21.428701Z: send signal to pidfd: Permission denied
  Error: cleaning up container 61749260f9c4c96a51dc27fdd9cb8a86d80e4f2aa14eb7ed5b271791ff8008ae: removing container 61749260f9c4c96a51dc27fdd9cb8a86d80e4f2aa14eb7ed5b271791ff8008ae from runtime: `/usr/bin/crun delete --force 61749260f9c4c96a51dc27fdd9cb8a86d80e4f2aa14eb7ed5b271791ff8008ae` failed: exit status 1

  audit: type=1400 audit(1698232041.422:93): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.50.1" pid=4839 comm="3" requested_mask="receive" denied_mask="receive" signal=kill peer="/usr/bin/crun"

[1] sed -i 's/~alpha2/0000000/' /usr/sbin/apparmor_parser

Ubuntu 23.10

ii apparmor 4.0.0~alpha2-0ubuntu5 amd64 user-space parser utility for AppArmor
ii golang-github-containers-common 0.50.1+ds1-4 all Common files for github.com/containers repositories
ii podman 4.3.1+ds1-8 amd64 engine to run OCI-based containers in Pods


Revision history for this message
Martin Pitt (pitti) wrote :

FTR, after calling `aa-teardown`, stopping and removing work.

tags: added: mantic regression-release
description: updated
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in libpod (Ubuntu):
status: New → Confirmed
Revision history for this message
Martin Pitt (pitti) wrote :

Confirmed on current noble, even with the -proposed build https://launchpad.net/ubuntu/+source/libpod/4.7.2+ds1-2build1/+build/27032773

Revision history for this message
Martin Pitt (pitti) wrote :

I tried a more targeted workaround, with

  aa-complain /etc/apparmor.d/usr.bin.crun

or alternatively (without apparmor-utils, which isn't on the default cloud image):

  sed -i '/flags=/ s/unconfined/complain/' /etc/apparmor.d/usr.bin.crun

but for some reason that breaks podman entirely:

# podman run -it --rm docker.io/busybox
Failed to re-execute libcrun via memory file descriptor
ERRO[0000] Removing container 7c3c938f8e356a9834de6a114ad8b8353ffac7508c8aac131d588e1358ba2f30 from runtime after creation failed
Error: OCI runtime error: crun: Failed to re-execute libcrun via memory file descriptor

I just noticed that neither podman nor crun ship their own AppArmor profiles; /etc/apparmor.d/usr.bin.crun is shipped by apparmor. So I'm adding a package task, but leaving libpod as "affected" so that it is easier to find.

Revision history for this message
Martin Pitt (pitti) wrote :

I also tried

  aa-disable usr.bin.crun

but that doesn't work either. I guess it's not really crun, but profile="containers-default-0.50.1", but that is created dynamically -- it's not anywhere in /etc/apparmor.d/. I grepped the whole file system for that:

  grep: /usr/lib/podman/rootlessport: binary file matches
  grep: /usr/bin/podman: binary file matches
  grep: /usr/bin/buildah: binary file matches

Running an individual container with --security-opt=label=disable also doesn't work; same DENIED and failure.

"man containers.conf" points at apparmor_profile="container‐default", but not how to disable it. I naively tried apparmor_profile="none" but

  Error: AppArmor profile "none" specified but not loaded

But curiously an empty string works. 🎉 So, my official workaround:

  mkdir -p /etc/containers/containers.conf.d
  printf '[CONTAINERS]\napparmor_profile=""\n' > /etc/containers/containers.conf.d/disable-apparmor.conf

no longer affects: apparmor (Ubuntu)
no longer affects: apparmor (Ubuntu Mantic)
no longer affects: apparmor (Ubuntu Noble)
Revision history for this message
Georgia Garcia (georgiag) wrote (last edit ):

According to the AppArmor policy [1], the following rule is allowed

signal (receive) peer=unconfined,

And when there was no policy for /usr/bin/crun, the signal that is now being denied would fall under this rule, because crun was unconfined.
A profile for crun was added in Bug 2035315, because applications that make use of unprivileged user namespaces must be confined by an AppArmor profile. So to properly fix this bug, the following rule must be added:

signal (receive) peer=/usr/bin/crun,

or better yet, because of the AppArmor upstream commit that renames the profile [2]:

signal (receive) peer={/usr/bin/,}crun,

[1] https://github.com/containers/common/blob/main/pkg/apparmor/apparmor_linux_template.go#L23C36-L23C36
[2] https://gitlab.com/apparmor/apparmor/-/blob/2594d936ada5df797bc69e78a2ef8c6e6171d454/profiles/apparmor.d/crun
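
For example, you can check which peer name applies on a given system by listing the profiles loaded into the kernel (requires root):

  # Prints e.g. "/usr/bin/crun (enforce)" or "crun (enforce)" depending on the apparmor version
  grep crun /sys/kernel/security/apparmor/profiles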

Revision history for this message
André Oliveira (oribunokiyuusou) wrote (last edit ):

I too ran into this issue, but `aa-disable usr.bin.crun` was enough to sort it out.

EDIT: aa-disable allows me to run the crun command manually, but when podman runs it, it still fails.

Revision history for this message
André Oliveira (oribunokiyuusou) wrote :

This is definitely happening on Mantic, as that's what I'm running. It would be a shame to see this make its way into Noble as podman gradually replaces docker.

Changed in libpod (Ubuntu Mantic):
status: New → Confirmed
Martin Pitt (pitti)
tags: added: cockpit-test
Revision history for this message
Tomáš Virtus (virtustom) wrote :

There's a similar issue with runc (and containerd and docker) reported here https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2039294

I've opened PRs with a fix upstream:
- https://github.com/containerd/containerd/pull/10123
- https://github.com/moby/moby/pull/47749

I think I'll need to work a little more on them to dynamically add rules only for profiles that exist on the system, even though it works even if they don't exist. Is this a proper way to fix it? I have gained all my experience with AppArmor in the last 2 days.
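
A minimal sketch of such an existence check (the profile name and rule are illustrative; the kernel's profile list requires root to read):

  # Emit the peer rule only if the runtime's profile is actually loaded
  if grep -q '^/usr/bin/crun ' /sys/kernel/security/apparmor/profiles; then
      echo 'signal (receive) peer=/usr/bin/crun,'
  fi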

For podman a similar change should be applied to the profile template defined here https://github.com/containers/common/blob/main/pkg/apparmor/apparmor_linux_template.go. I can do that later.

description: updated
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in golang-github-containers-common (Ubuntu):
status: New → Confirmed
description: updated
description: updated
Revision history for this message
Lucas Kanashiro (lucaskanashiro) wrote :

Hi Tomáš,

Thanks for investigating this issue and providing the patch (MP) to fix it in Noble. However, before fixing it in Noble, we need to fix it in Oracular (development release). Would you like to provide a patch or MP targeting Oracular?

Revision history for this message
Lucas Kanashiro (lucaskanashiro) wrote :

I also see that you are patching golang-github-containers-common. Does that mean that no patch in libpod is needed? If the answer is yes, we need to mark the libpod tasks as Invalid.

Revision history for this message
Neil Wilson (neil-aldur) wrote :

To move this along a bit more rapidly, as it is a blocking issue for me:

It's the same version in Oracular at present. I've pushed the changes as an MP against ubuntu/devel.

What needs to happen next?

Revision history for this message
Neil Wilson (neil-aldur) wrote :

The patch above doesn't work as it stands. We are still getting signal denials in the audit log:

May 14 11:13:06 srv-omzr6 kernel: audit: type=1400 audit(1715685186.296:112): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=8031 comm="3" requested_mask="receive" denied_mask="receive" signal=term peer="crun"
May 14 11:13:06 srv-omzr6 kernel: audit: type=1400 audit(1715685186.318:113): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=8033 comm="3" requested_mask="receive" denied_mask="receive" signal=term peer="crun"
May 14 11:13:16 srv-omzr6 kernel: audit: type=1400 audit(1715685196.340:114): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=8035 comm="3" requested_mask="receive" denied_mask="receive" signal=kill peer="crun"
May 14 11:13:21 srv-omzr6 kernel: audit: type=1400 audit(1715685201.413:115): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=7664 comm="conmon" requested_mask="receive" denied_mask="receive" signal=term peer="podman"
May 14 11:14:31 srv-omzr6 kernel: audit: type=1400 audit(1715685271.577:116): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=8049 comm="3" requested_mask="receive" denied_mask="receive" signal=term peer="crun"
May 14 11:14:36 srv-omzr6 kernel: audit: type=1400 audit(1715685276.326:117): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=8052 comm="3" requested_mask="receive" denied_mask="receive" signal=kill peer="crun"
May 14 11:14:41 srv-omzr6 kernel: audit: type=1400 audit(1715685281.392:118): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=7458 comm="conmon" requested_mask="receive" denied_mask="receive" signal=term peer="podman"
May 14 11:14:41 srv-omzr6 kernel: audit: type=1400 audit(1715685281.604:119): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=8055 comm="3" requested_mask="receive" denied_mask="receive" signal=kill peer="crun"

Revision history for this message
Tomáš Virtus (virtustom) wrote :

@lucaskanashiro: This patch is for the golang-github-containers-common source package. That source package produces the golang-github-containers-common-dev binary package, which is just source code on the filesystem. The podman binary package, which is built from the libpod source package, has golang-github-containers-common-dev in its Build-Depends, but doesn't depend on it at runtime. So when golang-github-containers-common is updated, libpod should be rebuilt. Should I keep libpod as Confirmed and make a patch against libpod to trigger a rebuild?

@neil-aldur: I'll try to reproduce.

Revision history for this message
Neil Wilson (neil-aldur) wrote :

The debdiff I've put together for oracular updates the patch to be a bit more general and cover all the signals I've seen so far in testing. (As well as dropping the other patch that has been incorporated upstream).

  # Allow certain signals from OCI runtimes (podman, runc and crun)
  signal (receive) set=(int, quit, kill, term) peer={/usr/bin/,/usr/sbin/,}runc,
  signal (receive) set=(int, quit, kill, term) peer={/usr/bin/,/usr/sbin/,}crun,
  signal (receive) set=(int, quit, kill, term) peer={/usr/bin/,/usr/sbin/,}podman,

Upstream have said they have no apparmor experience, so I suspect they will take a PR. See https://github.com/containers/common/issues/1898

Revision history for this message
Neil Wilson (neil-aldur) wrote :

I've built a backported 4.9.4 libpod for noble based on an updated golang-github-containers-common including the above patch.

It's available from ppa:brightbox/experimental

Revision history for this message
Neil Wilson (neil-aldur) wrote :

Adding the podman signal line and building a libpod that overrides the default packages eliminates the errors I was getting.

All the tests in this ticket pass with the updated packages.

Revision history for this message
Tomáš Virtus (virtustom) wrote (last edit ):

@neil-aldur, did you forget to attach the debdiff?

By restricting the signal set you also restrict which $SIG you can pass to "podman kill --signal $SIG".
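
For example, with the restricted set above, a signal outside the set would now be denied ("foo" is a placeholder container name):

  # USR1 is not in set=(int, quit, kill, term), so this would be blocked
  podman kill --signal USR1 foo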

I did not realize that there's a podman reference profile as well, but since podman doesn't try to kill the container by itself, I wonder if it makes sense to arbitrarily open a policy like this.

Also, whether your changes are good or not, they diverge from the policy changes we have already merged into containerd and moby upstream. Not sure if that's a problem.

Regarding your changes to the changelog entry in your MP: I based my entry on a code comment from ahasenack (https://code.launchpad.net/~fun2program8/ubuntu/+source/crun/+git/crun/+merge/464233; you have to select the b879 commit, it's the first code comment). I don't think we should copy the commit message into changelog entries; it's already in the patch.

Revision history for this message
Tomáš Virtus (virtustom) wrote :

Also, thanks for linking the podman issue. I'll try to merge patch upstream similar to moby and containerd.

Revision history for this message
Neil Wilson (neil-aldur) wrote :

The debdiff is in the MP above.

Podman does try to kill the container itself, as the error trace above testifies.

May 14 11:14:41 srv-omzr6 kernel: audit: type=1400 audit(1715685281.392:118): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=7458 comm="conmon" requested_mask="receive" denied_mask="receive" signal=term peer="podman"

It's trying to kill conmon in some scenarios, which means your policy changes so far are deficient in that regard. We can tighten the signal set there to term and kill, which is certainly no worse than the pre-4.0.0 situation.

I note the point about the signal set on the runtimes; that restriction should be removed, since the stop signal can be set to anything within the container.

I would suggest extending the AARE to cover the binaries as well as the policy name.

Revision history for this message
Tomáš Virtus (virtustom) wrote (last edit ):

Sorry, I missed the conmon-podman denial. Would you mind opening a PR upstream with your changes, linking the issue you posted? I think Lucas will not have time until the end of the week.

Revision history for this message
Neil Wilson (neil-aldur) wrote :

I've pushed the changes based on your comments to the MP above. I've left the signal set for podman as (int, quit, term, kill).

Do you think that signal set should be tighter, or is that a good compromise?

If that seems ok with you, I'll happily handle the PR upstream at GitHub.

Revision history for this message
Tomáš Virtus (virtustom) wrote :

Thanks Neil, I'll let you handle the upstream. I think what you have in the MP is fine.

Revision history for this message
Neil Wilson (neil-aldur) wrote :

PR accepted upstream. I've backported the patch to the oracular MP above.

What needs to be done now to get this into an SRU for noble?

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

I'm going over this bug in my patch pilot shift, trying to understand all the back and forth that happened.

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

I tested the fix and it's actually ready to sponsor, but I don't know how real deployments out there could use this fix once it's available.

The profile is not a file on disk; it's inside the podman binary. Nothing reloads the fixed profile when the package is upgraded or installed; it's only loaded when a container is created. But even if I start a new container with the new podman, it won't load the new fixed apparmor profile: it will use the one that is already loaded into the kernel (the wrong one).

Without resorting to a reboot, how would one apply this fix to a live system?

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

This worked to remove the profile:

  # echo -n containers-default-0.57.4 > /sys/kernel/security/apparmor/.remove

Then of course all running podman containers become unconfined. You can at least stop them, and for any new containers you start from now on, the first one will trigger the load of the updated apparmor profile.
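
Put together, a live-system recovery might look like this (profile name taken from the audit logs above; adjust it to match the loaded profile):

  echo -n containers-default-0.57.4 > /sys/kernel/security/apparmor/.remove
  podman stop --all    # the now-unconfined containers can be stopped
  podman run -d --name fresh docker.io/library/nginx:latest    # loads the fixed profile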

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

Preempting an SRU analysis of this bug, for noble, I would ask for more clarification:

- make it clearer that while bin:podman has the apparmor profile bits that need fixing, they come from src:golang-github-containers-common. In other words, both packages need to be SRUed, and src:golang-github-containers-common needs to be updated first, land in proposed, and then src:libpod can be rebuilt
- just upgrading the bin:podman package with the fix is not enough: it looks like the loading of the profile is gated on the version number, as shown by the profile name: "containers-default-0.57.4". I haven't tested this, but I think that if that version changed, then when starting a new container with the new podman, the new profile would be loaded, instead of taking the one already loaded into the kernel.

Perhaps we could mangle that version to incorporate an ubuntu suffix for such cases like this SRU, where we are fixing the apparmor profile?

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

I started a pod with the old podman, which loaded the profile containers-default-0.57.4 as expected:

[Fri Jun 14 20:12:06 2024] audit: type=1400 audit(1718395926.298:139): apparmor="STATUS" operation="profile_load" profile="podman" name="containers-default-0.57.4" pid=1241 comm="apparmor_parser"

Which fails to stop the container (this bug we are dealing with):
[Fri Jun 14 20:12:11 2024] audit: type=1400 audit(1718395931.205:140): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=1277 comm="3" requested_mask="receive" denied_mask="receive" signal=quit peer="crun"

Then I upgraded to podman with this POC patch:
diff --git a/pkg/apparmor/apparmor.go b/pkg/apparmor/apparmor.go
index 146280d..6a09dd4 100644
--- a/pkg/apparmor/apparmor.go
+++ b/pkg/apparmor/apparmor.go
@@ -11,7 +11,7 @@ const (
 	ProfilePrefix = "containers-default-"
 
 	// Profile default name
-	Profile = ProfilePrefix + version.Version
+	Profile = ProfilePrefix + version.Version + "ubuntu1"
 )
 
 var (

And the moment I started a new pod:
root@o-podman:~# podman run -d --name foo2 docker.io/library/nginx:latest
acc16efb4482e8dff2d6192a65e74590d0528bad76bee1aba9923c0d2e0b73cf

Notice how a NEW apparmor profile is loaded:
[Fri Jun 14 20:12:31 2024] audit: type=1400 audit(1718395951.153:141): apparmor="STATUS" operation="profile_load" profile="podman" name="containers-default-0.57.4ubuntu1" pid=1730 comm="apparmor_parser"

And this container can be stopped:
root@o-podman:~# podman stop foo2
foo2

But the older foo container I cannot stop:
root@o-podman:~# podman stop foo
WARN[0010] StopSignal SIGQUIT failed to stop container foo in 10 seconds, resorting to SIGKILL
Error: given PID did not die within timeout

Because it has the old apparmor profile loaded.
[Fri Jun 14 20:14:40 2024] audit: type=1400 audit(1718396080.061:142): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.57.4" pid=1803 comm="3" requested_mask="receive" denied_mask="receive" signal=quit peer="crun"

So we could think about doing something with the version component that is part of the apparmor profile.

This wouldn't 100% solve the upgrade problem though, as the loaded profile in the kernel is still the old one, and won't be replaced.

Revision history for this message
Neil Wilson (neil-aldur) wrote : Re: [Bug 2040483] Re: AppArmor denies crun sending signals to containers (stop, kill)

It may be more useful to generate a synthetic replacement definition from the go file, and then `apparmor_parser -r` the definition if the definition exists.
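
A minimal sketch of that idea, assuming the fixed profile text has been rendered from pkg/apparmor/apparmor_linux_template.go into a file (the path is hypothetical):

  # Replace the loaded kernel profile with the fixed definition
  apparmor_parser -r /tmp/containers-default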

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

For some reason I can't assign mantic and noble tasks to golang-github-containers-common, but it also needs fixing there.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package golang-github-containers-common - 0.57.4+ds1-2ubuntu1

---------------
golang-github-containers-common (0.57.4+ds1-2ubuntu1) oracular; urgency=medium

  * d/p/apparmor-Allow-confined-runc-crun-to-kill-containers.patch: patch to
    fix apparmor signal filtering (LP: #2040483)
  * d/p/fix-apparmor-parsing.patch: drop patch no longer needed

 -- Neil Wilson <email address hidden> Wed, 22 May 2024 15:25:54 +0100

Changed in golang-github-containers-common (Ubuntu Oracular):
status: Confirmed → Fix Released
Revision history for this message
Andreas Hasenack (ahasenack) wrote :

Hm, where is my libpod upload? It needs a rebuild after golang-github-containers-common.

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

For future SRUs, and uploads, also consider: https://github.com/containers/common/issues/2023

And I filed https://github.com/containers/common/issues/2054 upstream for consideration about the upgrade scenario where the apparmor profile was changed, but the upstream version wasn't.

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

Ok, now it's uploaded to oracular:
Uploading libpod_4.9.4+ds1-1build1.dsc
Uploading libpod_4.9.4+ds1-1build1.debian.tar.xz
Uploading libpod_4.9.4+ds1-1build1_source.buildinfo
Uploading libpod_4.9.4+ds1-1build1_source.changes

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package libpod - 4.9.4+ds1-1build1

---------------
libpod (4.9.4+ds1-1build1) oracular; urgency=medium

  * No change rebuild with new golang-github-containers-common, to pick up
    apparmor fix (LP: #2040483)

 -- Andreas Hasenack <email address hidden> Mon, 17 Jun 2024 15:43:41 -0300

Changed in libpod (Ubuntu Oracular):
status: Confirmed → Fix Released
Revision history for this message
kompas (kompastver) wrote :

Is there any chance the fix will be released for 24.04 as well?

Revision history for this message
Simeon Ehrig (simeonehrig) wrote :

Please backport the fix to Ubuntu 24.04. Stopping containers is an essential podman feature.

Revision history for this message
Lucas Kanashiro (lucaskanashiro) wrote :

I can work on this after I finish a couple of other work items I have on my plate right now. I estimate that would be next month.

If anyone else is willing to fix this before I have the time, I'd happily hand it over to you :)

Revision history for this message
Simeon Ehrig (simeonehrig) wrote :

Thanks for covering the topic.

Unfortunately I have no idea about developing with AppArmor. I'm only a user, but I'm fine with waiting a few weeks. At the moment it only breaks automatic container updates for me, so I have to do them by hand.

Revision history for this message
Guo Le (timguole) wrote :

My k8s cluster faces the same problem on Ubuntu 24.04 with containerd 1.7.12. I searched the web and found some info:
1. containerd embeds its apparmor profile in its Go source code.
2. containerd has fixed this issue in recent releases, 1.7.19 or possibly earlier. The profile template file now contains two more lines: "signal (receive) peer=runc" and "signal (receive) peer=crun".
3. Disabling apparmor and rebooting the system can work around the problem (see the sketch below). At least my k8s can terminate pods now.
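
For item 3, a sketch of what disabling AppArmor typically involves (system-wide and heavy-handed; stopping the service alone does not unload already-loaded profiles, hence the reboot):

  systemctl disable apparmor.service
  reboot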
