Workload cannot access the service account token on Juju 3.5.0

Bug #2066517 reported by Marcelo Henrique Neppel
This bug affects 2 people
Affects: Canonical Juju
Status: Fix Released
Importance: Critical
Assigned to: Harry Pidcock
Milestone: 3.5.1

Bug Description

The workload from a K8s charm cannot access the service account token on Juju 3.5.0. It works fine up to and including Juju 3.4.2.

Error message from `pebble logs` after deploying with `juju deploy postgresql-k8s --channel 14/edge --trust` (this charm runs the workload as another user, postgres):

2024-05-22T20:35:21.920Z [postgresql] PermissionError: [Errno 13] Permission denied: '/var/run/secrets/kubernetes.io/serviceaccount/token'
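
For context, the failing operation is the ordinary in-cluster token read that Kubernetes client libraries perform inside the workload; a minimal sketch:

```
# Minimal sketch of the read that fails; TOKEN_PATH is the standard
# in-cluster location, and the workload runs as the non-root postgres user.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

with open(TOKEN_PATH) as f:
    token = f.read()  # raises PermissionError on 3.5.0 when run as postgres
```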

3.4.2 permissions:

root@postgresql-k8s-0:/# ls -al /var/run/secrets/kubernetes.io/serviceaccount/token
lrwxrwxrwx 1 root root 12 May 22 14:38 /var/run/secrets/kubernetes.io/serviceaccount/token -> ..data/token

root@postgresql-k8s-0:/# ls -al /var/run/secrets/kubernetes.io/serviceaccount/..data/token
-rw-r--r-- 1 root root 977 May 22 14:38 /var/run/secrets/kubernetes.io/serviceaccount/..data/token

3.5.0 permissions:

root@postgresql-k8s-0:/# ls -al /var/run/secrets/kubernetes.io/serviceaccount/token
lrwxrwxrwx 1 root 170 12 May 22 14:04 /var/run/secrets/kubernetes.io/serviceaccount/token -> ..data/token

root@postgresql-k8s-0:/# ls -al /var/run/secrets/kubernetes.io/serviceaccount/..data/token
-rw-r----- 1 root 170 1142 May 22 14:04 /var/run/secrets/kubernetes.io/serviceaccount/..data/token

On 3.5.0 the token file is owned by root:170 with mode 0640: the "other" read bit is gone, so users outside group 170 cannot access the token anymore.
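
A quick illustrative check that reproduces the comparison above from inside the container:

```
# os.stat follows the ..data symlink, so this reports the real token file.
import os
import stat

st = os.stat("/var/run/secrets/kubernetes.io/serviceaccount/token")
print(oct(stat.S_IMODE(st.st_mode)), st.st_uid, st.st_gid)
# Juju 3.4.2: 0o644 0 0   -> "other" read bit set, any user can read it
# Juju 3.5.0: 0o640 0 170 -> only root and members of GID 170 can read it
```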

Marcelo Henrique Neppel (neppel) wrote:

One workaround was to create a named group with GID 170 and use that group in the Pebble layer for the service that needs access to the service account token, as sketched below. However, this doesn't seem to be the proper way to fix it.
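
For reference, a sketch of that workaround in an ops-based charm (hedged: the group name "token-readers" is illustrative, and GID 170 must match the group that owns the mounted token file):

```
import ops

def grant_token_access(container: ops.Container) -> None:
    # Create a named group with GID 170 inside the workload container.
    container.exec(["groupadd", "--gid", "170", "token-readers"]).wait()
    # Run the service under that group so the token's 0640 group-read
    # bit applies to the workload process.
    container.add_layer(
        "postgresql",
        {
            "services": {
                "postgresql": {
                    "override": "merge",
                    "user": "postgres",
                    "group": "token-readers",
                }
            }
        },
        combine=True,
    )
    container.replan()
```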

Do you have any recommendations on how to handle this situation? Thanks in advance.

tags: added: canonical-data-platform-eng
description: updated
Harry Pidcock (hpidcock)
Changed in juju:
assignee: nobody → Harry Pidcock (hpidcock)
importance: Undecided → Critical
milestone: none → 3.5.1
status: New → In Progress
John A Meinel (jameinel) wrote:

Harry said he found this in the upstream Kubernetes source:
```
case source.ServiceAccountToken != nil:
	tp := source.ServiceAccountToken

	// When FsGroup is set, we depend on SetVolumeOwnership to
	// change from 0600 to 0640.
	mode := *s.source.DefaultMode
	if mounterArgs.FsUser != nil || mounterArgs.FsGroup != nil {
		mode = 0600
	}
```
In other words, as soon as the pod specifies an fsUser or fsGroup, the kubelet drops the projected token's mode from its default down to 0600, relying on SetVolumeOwnership to widen it to 0640 (group-owned by the fsGroup) when fsGroup is set. That matches the root:170, 0640 ownership observed above.

Harry Pidcock (hpidcock) wrote:

I identified three issues yesterday:
1. The charm I was looking at, kubeflow-dashboard, was very confusing because it did not specify that the application should run as the _daemon_ user. It was launched via `npm run`, which in older versions used to fork/exec as the owner of the node module's root directory, so a lot of time was spent discovering that issue there.
2. The fsGroup security context affects the default mode of a mounted service account token. This is bespoke logic, undocumented in Kubernetes, that only affects service account mounts. Very opinionated.
3. If all the containers in a pod use the same explicit user in their securityContext (set either at the pod level or the container level), then the service account token is mounted readable only by that user (no read bit for group or other).

The fix is to restore the previous behaviour of not specifying runAsUser/fsGroup for the pod/containers.
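
The actual change is in Juju's Go codebase, but the effect on the generated pod spec can be sketched with the Kubernetes Python client (illustrative only; the container name and image are placeholders):

```
# Illustrative sketch: the fix leaves the pod securityContext without
# runAsUser/fsGroup, so the projected service account token keeps its
# default 0644 mode and remains readable by non-root workload users.
from kubernetes import client

pod_spec = client.V1PodSpec(
    containers=[
        client.V1Container(name="workload", image="example/workload:latest"),
    ],
    # security_context deliberately left unset: no run_as_user / fs_group.
)
```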

Harry Pidcock (hpidcock)
Changed in juju:
status: In Progress → Fix Committed
Marcelo Henrique Neppel (neppel) wrote:

Thanks, Harry! I tested the fix using 3.5.1 from the candidate risk, and it's working fine.

Changed in juju:
status: Fix Committed → Fix Released