Comment 29 for bug 1972159

Tim Richardson (tim-richardson) wrote:

For me, systemd-oomd no longer kills at all. The memory pressure threshold is still active, but I think the default of 50% on the user slice is far too high. I can put a 4 GB test VM under extreme memory load and get so much swap activity that CPU load in a two-core VM exceeds 50, yet the memory pressure score is 14%. I cannot conceive of what kind of load would push it to 50%.
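For reference, the pressure figure oomd acts on comes from the kernel's PSI accounting, so you can read it directly on a cgroup-v2 system (the path assumes the default cgroup2 mount and user.slice layout):

```shell
# Kernel PSI numbers for the user slice; "avg10" is the 10-second
# average that systemd-oomd compares against its pressure limit.
cat /sys/fs/cgroup/user.slice/memory.pressure

# Ask systemd-oomd what it is monitoring and what pressure it currently
# sees (oomctl ships with newer systemd releases).
oomctl
```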

I have set the user slice threshold to 10%, and when I attempt to load 100 tabs, the browser is killed a couple of minutes after memory and swap are exhausted. It's not an aggressive kill, but it lets systemd-oomd actually kill something.
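For anyone who wants to try the same thing, the knob is `ManagedOOMMemoryPressureLimit=` from systemd.resource-control(5). A drop-in along these lines is what I mean; the unit name is an assumption (on Ubuntu the 50% default is shipped as a drop-in for `user@.service`), so adjust it to wherever your distro sets the limit:

```ini
# Override created with e.g. `sudo systemctl edit user@.service`.
[Service]
ManagedOOMMemoryPressure=kill
ManagedOOMMemoryPressureLimit=10%
```

Then run `systemctl daemon-reload` and re-log-in (or reboot) so the user slice picks up the new limit.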

So far it has only ever killed the guilty app. If the aim is for systemd-oomd to never kill anything, then a 50% memory pressure threshold with swap-based killing disabled achieves that goal; but if you want it to kill based on memory pressure, the threshold needs to be much lower. Killing on memory pressure was supposed to be one of the great things about systemd-oomd, I thought.

I note that systemd-cgtop shows there are many tasks under the user slice (I have about 400 when idle, and about 1200 when the browser is trying to load all those tabs). All the system slices have < 5 tasks. So one or two stalled processes in a system slice produce a steep increase in the memory pressure KPI, but with so many tasks in the user slice, the KPI there is highly "diluted" and needs a much lower threshold to be meaningful.
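The task counts above come from systemd-cgtop; a one-shot snapshot sorted by task count looks like this (flags per systemd-cgtop(1)):

```shell
# Single iteration instead of the interactive display, ordered by the
# tasks column so user.slice shows up near the top.
systemd-cgtop --order=tasks --iterations=1
```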

Maybe this is all very different on a Raspberry Pi.