Activity log for bug #2047356

Date Who What changed Message
2023-12-25 11:32:02 Mossroy bug added bug
2023-12-25 15:57:54 Mossroy description: added a "Technical details" paragraph (k3s uses containerd; local-path volumes are stored under /var/lib/rancher/k3s/storage; proposes making GNOME ignore these mounts, much as https://github.com/moby/moby/blob/b96a0909f0ebc683de817665ff090d57ced6f981/contrib/udev/80-docker.rules does for Docker)
2023-12-25 15:58:29 Mossroy description: put the workaround command in quotes ("systemctl stop --user gvfs-udisks2-volume-monitor")
2023-12-25 16:26:38 Mossroy description: fixed the filename typo "deployment-wit-many-volumes.yaml" → "deployment-with-many-volumes.yaml" in the wget and apply commands; changed the 80-docker.rules link to point at master instead of a pinned commit
2023-12-25 16:27:50 Mossroy description: changed "On Ubuntu 22.04.3" to "On Ubuntu 22.04.3 desktop"
2023-12-25 16:29:52 Mossroy description: added "(even if idle)" after "when the workload is running"
2023-12-25 16:30:26 Mossroy description: whitespace-only edit (no visible text change)
2024-01-20 14:01:45 Mossroy description: fixed the remaining filename typo in the delete command ("deployment-wit-many-volumes.yaml" → "deployment-with-many-volumes.yaml")
2024-02-05 10:42:34 Patryk Skorupa bug added subscriber Patryk Skorupa
2024-02-06 10:19:59 Launchpad Janitor gvfs (Ubuntu): status New → Confirmed
2024-02-06 19:58:17 Alexander Kabakaev bug added subscriber Alexander Kabakaev

Final description (as of the 2024-01-20 edit):

On Ubuntu 22.04.3 desktop, when running a k3s workload that uses volumes (with the default local-path storageClass), the process gvfs-udisks2-volume-monitor can take around 100% of one CPU core, and gsd-housekeeping around 25% of one CPU core, even if the actual k3s workload is idle.

Steps to reproduce:
- Use or install a desktop Ubuntu 22.04.3 (with default settings)
- Install K3s on it (current version is "v1.28.4+k3s2") with default settings: "curl -sfL https://get.k3s.io | sh -"
- Deploy k8s manifests with many volumes, like https://gitlab.com/-/snippets/3634487: "wget https://gitlab.com/-/snippets/3634487/raw/main/deployment-with-many-volumes.yaml && sudo k3s kubectl apply -f deployment-with-many-volumes.yaml"
- Check CPU consumption on the host, with top, gnome-system-monitor or anything else

Expected behavior: GNOME desktop tools should not interfere with k3s.

Actual behavior: the processes gvfs-udisks2-volume-monitor and gsd-housekeeping consume a lot of CPU, at least at provisioning time. The same CPU consumption occurs if you then remove the workload ("sudo k3s kubectl delete -f deployment-with-many-volumes.yaml"), until the PVs are deleted by k3s. I have other workloads (with data in PVs) where this CPU consumption is always present while the workload is running (even if idle).

Additional context: the symptoms are very similar to https://github.com/k3s-io/k3s/issues/522, but the workaround from https://github.com/k3s-io/k3s/issues/522#issuecomment-811737023 (adding a udev rule to ignore some loopback devices) does not help. Executing "systemctl stop --user gvfs-udisks2-volume-monitor" can be a temporary workaround.

Technical details: k3s uses containerd to run containers. The local-path storageClass mounts local volumes (physically stored in subfolders of /var/lib/rancher/k3s/storage) into these containers. I suppose GNOME applications try to scan these mount points. If so, the solution might be to make them ignore these mounts, much as https://github.com/moby/moby/blob/master/contrib/udev/80-docker.rules does for Docker.

NB: this was initially reported at https://github.com/k3s-io/k3s/issues/9093
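
The linked snippet is not reproduced in the log. A minimal manifest of the same shape (all names hypothetical, not taken from the snippet) would provision a local-path PVC and mount it into an otherwise idle pod; duplicating the PVC and the volume/volumeMount pair yields the "many volumes" case:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-pvc-1              # hypothetical name
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: local-path  # k3s default provisioner
    resources:
      requests:
        storage: 100Mi
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: volume-test             # hypothetical name
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: volume-test
    template:
      metadata:
        labels:
          app: volume-test
      spec:
        containers:
        - name: idle
          image: busybox
          # keep the pod alive without doing any work
          command: ["sh", "-c", "while true; do sleep 3600; done"]
          volumeMounts:
          - name: vol1
            mountPath: /data1
        volumes:
        - name: vol1
          persistentVolumeClaim:
            claimName: test-pvc-1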
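
The "systemctl stop --user" workaround only lasts until the service is started again (for example at the next login). A sketch of making it persist, going beyond what the reporter describes, is to mask the user unit:

  # stop the monitor for the current session
  systemctl --user stop gvfs-udisks2-volume-monitor
  # prevent it from being started again; reversible with "systemctl --user unmask"
  systemctl --user mask gvfs-udisks2-volume-monitor

Note that this disables volume monitoring for the whole desktop session (for example automounting of removable drives in GNOME), not just for k3s mounts.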
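
For comparison, the 80-docker.rules file cited in the description hides Docker's loop devices from udisks by matching on their backing file. A hypothetical k3s equivalent following the same pattern is sketched below; the reporter notes that a loop-device rule of this kind did not help here, plausibly because local-path volumes are bind mounts of plain directories rather than loop devices:

  # /etc/udev/rules.d/80-k3s.rules (hypothetical, modeled on moby's 80-docker.rules)
  # hide k3s loop devices backed by files under /var/lib/rancher from udisks
  SUBSYSTEM=="block", DEVPATH=="/devices/virtual/block/loop*", ATTR{loop/backing_file}=="/var/lib/rancher/*", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_IGNORE}="1"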