gvfs-udisks2-volume-monitor and gsd-housekeeping processes can eat a lot of CPU with k3s workload

Bug #2047356 reported by Mossroy
This bug affects 6 people
Affects: gvfs (Ubuntu)
Status: Confirmed
Importance: Undecided
Assigned to: Unassigned

Bug Description

On Ubuntu 22.04.3 desktop, when running a k3s workload that uses volumes (with the default local-path storageClass), the gvfs-udisks2-volume-monitor process can take around 100% of one CPU core, and gsd-housekeeping around 25% of one CPU core, even when the k3s workload itself is idle.

Steps To Reproduce:

- Use or install a desktop Ubuntu 22.04.3 (with default settings)
- Install K3s on it (current version is "v1.28.4+k3s2"), with default settings: "curl -sfL https://get.k3s.io | sh -"
- Deploy k8s manifests with many volumes, like https://gitlab.com/-/snippets/3634487: "wget https://gitlab.com/-/snippets/3634487/raw/main/deployment-with-many-volumes.yaml && sudo k3s kubectl apply -f deployment-with-many-volumes.yaml"
- Check CPU consumption on the host, with top, gnome-system-monitor or anything else
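For the last step, a small helper like this can make the measurement repeatable (this is my own sketch, not part of the original report; it sums the %CPU column that ps reports for all processes with a given name):

```shell
# Sum the %CPU of all processes matching a command name.
# Hypothetical helper, not from the bug report.
cpu_of() {
  ps -C "$1" -o %cpu= | awk '{s += $1} END {printf "%.1f\n", s + 0}'
}

# Example usage:
# cpu_of gvfs-udisks2-volume-monitor
# cpu_of gsd-housekeeping
```

Running the two example calls before and after applying the manifest should show the jump in CPU usage described below.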

Expected behavior:
Gnome desktop tools should not interfere with k3s.

Actual behavior:
The gvfs-udisks2-volume-monitor and gsd-housekeeping processes consume a lot of CPU, at least at provisioning time.
The same CPU consumption persists if you then remove the workload ("sudo k3s kubectl delete -f deployment-with-many-volumes.yaml"), until the PVs are deleted by k3s.
I have other workloads (with data in PVs) where this CPU consumption is always present while the workload is running, even if it is idle.
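Each PV presumably adds an entry to the mount table that the volume monitor then rescans. The correlation can be checked by counting k3s mount entries while the workload is up (my own sketch, not from the report; the path pattern matches the local-path storage directory mentioned below):

```shell
# Count mount-table entries under the k3s local-path storage directory.
# Hypothetical helper; reads /proc/self/mountinfo by default, or a file
# passed as the first argument (useful for testing).
count_k3s_mounts() {
  grep -c 'rancher/k3s/storage' "${1:-/proc/self/mountinfo}"
}
```

If the count rises with the deployment and the CPU usage rises with it, that supports the mount-scanning hypothesis.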

Additional context:
The symptoms are very similar to https://github.com/k3s-io/k3s/issues/522, but the workaround of comment https://github.com/k3s-io/k3s/issues/522#issuecomment-811737023 (adding a udev rule to ignore some loopback devices) does not help.

Executing "systemctl --user stop gvfs-udisks2-volume-monitor" can be a temporary workaround.

Technical details:
k3s uses containerd to run containers. The local-path storageClass mounts local volumes (physically stored in /var/lib/rancher/k3s/storage subfolders) in these containers.
I suppose GNOME applications try to scan these mount points. If so, the solution might be to make them ignore these paths, similar to what https://github.com/moby/moby/blob/master/contrib/udev/80-docker.rules does for Docker.
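For illustration, a Docker-style rule for k3s might look like the sketch below. This is hypothetical and unverified: the file name (80-k3s.rules) and the match on loop devices are my own assumptions, and since local-path volumes are bind-mounted directories rather than block devices, a udev rule may simply not apply here (which could explain why the Docker-style rule did not help):

```shell
# Sketch: install a udev rule asking udisks2 to ignore loop devices,
# in the spirit of Docker's 80-docker.rules. Hypothetical, not a
# verified fix for this bug.
install_k3s_udev_rule() {
  dir="${1:-/etc/udev/rules.d}"   # pass a directory to test without root
  cat > "$dir/80-k3s.rules" <<'EOF'
SUBSYSTEM=="block", KERNEL=="loop*", ENV{UDISKS_IGNORE}="1"
EOF
  # Reload rules so the change takes effect (needs root on a real system):
  udevadm control --reload-rules 2>/dev/null || true
}
```

A real fix would more likely need gvfs itself to skip paths under /var/lib/rancher/k3s/storage, since no block device is involved.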

NB: Was initially reported on https://github.com/k3s-io/k3s/issues/9093

Revision history for this message
Mossroy (mossroy) wrote :

I have the same behavior with Ubuntu 23.10.1 (with all current updates), using latest stable k3s (v1.28.5+k3s1)

Revision history for this message
Mossroy (mossroy) wrote :

Same behavior on Ubuntu 24.04 daily (2024-01-20, with Gnome 46), and on Fedora Workstation 39 (with Gnome 45)

Revision history for this message
Patryk Skorupa (skoruppa) wrote :

I have the same issue. It is hard to work with Docker as those two processes consume almost 70% of my CPU. The udev rule does not help me :/

Revision history for this message
Mossroy (mossroy) wrote :

Thanks for your feedback, I feel less alone.
Can you mark that this issue affects you at the top of this page (under the issue description)?

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in gvfs (Ubuntu):
status: New → Confirmed