Memory leaking when running kubernetes cronjobs
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
linux (Ubuntu) | | High | Unassigned |
Bionic | | High | Unassigned |
Cosmic | | High | Unassigned |
linux-azure (Ubuntu) | | High | Unassigned |
Bug Description
We are using Kubernetes v1.8.15 with Docker 18.03.1-ce.
We schedule 50 Kubernetes cronjobs to run every 5 minutes. Each cronjob will create a simple busybox container, echo hello world, then terminate.
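For reference, a minimal sketch of the kind of CronJob involved (the name is illustrative; we create 50 of these, all on the same schedule):

```
# Minimal sketch of one of the 50 CronJobs (name is illustrative).
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-1
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "echo hello world"]
EOF
```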
In the data attached to the bug I let this run for 1 hour, and in that time the available memory dropped from 31256704 kB to 30461224 kB, a loss of roughly 776 MiB. From previous, longer runs we observe that the available memory continues to drop.
There don't appear to be any processes left behind, or any growth in other processes, to explain where the memory has gone. Running echo 3 > /proc/sys/vm/drop_caches does not recover the missing memory.
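A quick way to check this by hand (as root), assuming the figures quoted above are MemAvailable from /proc/meminfo:

```
# Check available memory, drop reclaimable caches, and check again.
# Assumes MemAvailable in /proc/meminfo is the figure being tracked.
grep MemAvailable /proc/meminfo
sync
echo 3 > /proc/sys/vm/drop_caches
grep MemAvailable /proc/meminfo
```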
We are currently running Ubuntu's 4.15.0 kernel (4.15.0-32-generic per the Uname data below); previously we ran the same workload on Debian.
The leak was more severe on the Debian system, and investigations there showed leaks in pcpu_get_vm_areas that were related to memory cgroups. Running kernel 4.17 on Debian showed a leak at a similar rate to what we now observe on Ubuntu 18.04. This leak causes us problems because we need to run the cronjobs regularly and need the systems to stay up for months.
Kubernetes will create a new cgroup each time the cronjob runs, but these are removed when the job completes (which takes a few seconds). If I use systemd-cgtop I don't see any increase in cgroups over time - but if I monitor /proc/cgroups over time I can see num_cgroups for memory increases.
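A simple way to watch that counter over time (assuming cgroup v1 with the memory controller, as on a stock 18.04 node):

```
# Log the memory controller's cgroup count once a minute.
# /proc/cgroups columns: subsys_name  hierarchy  num_cgroups  enabled
while true; do
    printf '%s ' "$(date +%T)"
    awk '$1 == "memory" { print "num_cgroups =", $3 }' /proc/cgroups
    sleep 60
done
```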
For the duration of the test I collected slabinfo, meminfo, vmallocinfo & cgroups - which I will attach to the bug. Each file is suffixed with the number of seconds since the start.
*.0 & *.600 were taken before the test was started. The test was stopped shortly after the *.4200 files were generated. I then left the system idle for 10 minutes and ran echo 3 > /proc/sys/vm/drop_caches.
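The collection was essentially a loop like the following sketch (the 600-second interval is inferred from the *.0/*.600 file suffixes; the actual script may have differed):

```
# Sketch of the data-collection loop behind the attached files.
start=$(date +%s)
while true; do
    t=$(( $(date +%s) - start ))
    cp /proc/slabinfo    slabinfo.$t
    cp /proc/meminfo     meminfo.$t
    cp /proc/vmallocinfo vmallocinfo.$t   # needs root
    cp /proc/cgroups     cgroups.$t
    sleep 600
done
```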
Note: the data attached is from running a 4.18.7 kernel. When I tried to report the bug from that kernel, apport refused:
*** Problem in linux-image-
The problem cannot be reported:
This report is about a package that is not installed.
So I switched back to the 4.15.0 kernel to file this report.
ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: linux-image-4.15.0-32-generic
ProcVersionSign
Uname: Linux 4.15.0-32-generic x86_64
AlsaDevices:
total 0
crw-rw---- 1 root audio 116, 1 Sep 13 08:55 seq
crw-rw---- 1 root audio 116, 33 Sep 13 08:55 timer
AplayDevices: Error: [Errno 2] No such file or directory: 'aplay': 'aplay'
ApportVersion: 2.20.9-0ubuntu7.2
Architecture: amd64
ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord': 'arecord'
AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
CRDA: N/A
Date: Thu Sep 13 08:55:46 2018
IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig': 'iwconfig'
Lsusb:
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
MachineType: Xen HVM domU
PciMultimedia:
ProcEnviron:
LANG=C.UTF-8
SHELL=/bin/bash
TERM=xterm
PATH=(custom, no user)
ProcFB:
ProcKernelCmdLine: BOOT_IMAGE=
RelatedPackageVersions:
 linux-restricted-modules-4.15.0-32-generic N/A
 linux-backports-modules-4.15.0-32-generic N/A
linux-firmware N/A
RfKill: Error: [Errno 2] No such file or directory: 'rfkill': 'rfkill'
SourcePackage: linux
UpgradeStatus: No upgrade log present (probably fresh install)
WifiSyslog:
dmi.bios.date: 08/13/2018
dmi.bios.vendor: Xen
dmi.bios.version: 4.7.5-1.21
dmi.chassis.type: 1
dmi.chassis.vendor: Xen
dmi.modalias: dmi:bvnXen:
dmi.product.name: HVM domU
dmi.product.
dmi.sys.vendor: Xen
Daniel McGinnes (djmcginnes) wrote (#1):
Changed in linux (Ubuntu):
status: New → Confirmed
tags: added: xenial
Joseph Salisbury (jsalisbury) wrote (#3):
Would it be possible for you to test the latest upstream kernel? Refer to https:/
If this bug is fixed in the mainline kernel, please add the following tag: 'kernel-fixed-upstream'.
If the mainline kernel does not fix this bug, please add the tag: 'kernel-bug-exists-upstream'.
Once testing of the upstream kernel is complete, please mark this bug as "Confirmed".
Thanks in advance.
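For reference, installing a mainline test build generally amounts to something like this, assuming the generic amd64 .deb files for the chosen version have already been downloaded from the mainline PPA page linked on that wiki page:

```
# Install the previously downloaded mainline kernel packages and reboot
# into the new kernel (file names depend on the version chosen).
sudo dpkg -i linux-headers-*_all.deb linux-headers-*-generic*_amd64.deb \
            linux-image-*-generic*_amd64.deb linux-modules-*-generic*_amd64.deb
sudo reboot
```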
tags: added: kernel-da-key
Daniel McGinnes (djmcginnes) wrote (#4):
I re-ran the test on a 4.19.0 mainline kernel and the leak still occurs.
tags: added: kernel-bug-exists-upstream
Joseph Salisbury (jsalisbury) wrote (#5):
This issue appears to be an upstream bug, since you tested the latest upstream kernel. Would it be possible for you to open an upstream bug report[0]? That will allow the upstream Developers to examine the issue, and may provide a quicker resolution to the bug.
Please follow the instructions on the wiki page[0]. The first step is to email the appropriate mailing list. If no response is received, then a bug may be opened on bugzilla.
Once this bug is reported upstream, please add the tag: 'kernel-bug-reported-upstream'.
Changed in linux (Ubuntu):
importance: Undecided → Medium
status: Confirmed → Incomplete
status: Incomplete → Triaged
tags: added: kernel-bug-reported-upstream
Daniel McGinnes (djmcginnes) wrote (#6):
Bug reported - link to email is here ->
https:/
I got a pretty positive response:
Thank you for the very detailed and full report!
We've experienced the same (or very similar problem), when memory cgroups
were staying in the dying state for a long time, so that the number of
dying cgroups grew steadily with time.
I've investigated the issue and found several problems in the memory
reclaim and accounting code.
The following commits from the next tree are solving the problem in our case:
010cb21d4ede math64: prevent double calculation of DIV64_U64_ROUND_UP() arguments
f77d7a05670d mm: don't miss the last page because of round-off error
d18bf0af683e mm: drain memcg stocks on css offlining
71cd51b2e1ca mm: rework memcg kernel stack accounting
f3a2fccbce15 mm: slowly shrink slabs with a relatively small number of objects
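A rough way to see the dying-cgroup behaviour described above is to compare the kernel's cgroup count with the directories still visible in the hierarchy (a crude estimate; assumes cgroup v1 with the memory controller mounted at the usual path):

```
# Crude estimate of offlined-but-not-yet-freed ("dying") memory cgroups:
# the kernel's count minus the directories still visible in the hierarchy.
visible=$(find /sys/fs/cgroup/memory -type d | wc -l)
kernel=$(awk '$1 == "memory" { print $3 }' /proc/cgroups)
echo "visible=$visible kernel=$kernel dying~$((kernel - visible))"
```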
Changed in linux (Ubuntu Bionic):
status: New → Triaged
importance: Undecided → Medium
Changed in linux (Ubuntu):
importance: Medium → High
Changed in linux (Ubuntu Bionic):
importance: Medium → High
tags: added: kernel-key removed: kernel-da-key
tags: added: kernel-da-key removed: kernel-key
Daniel McGinnes (djmcginnes) wrote (#7):
Hi, as per this update -> https:/
I have a set of patches on top of kernel 4.19-rc3 that appear to resolve the issue. What is the process for getting these backported to the 4.15 kernel build for Ubuntu 18.04?
The list of patches is:
https:/
https:/
010cb21d4ede math64: prevent double calculation of DIV64_U64_ROUND_UP() arguments
f77d7a05670d mm: don't miss the last page because of round-off error
d18bf0af683e mm: drain memcg stocks on css offlining
71cd51b2e1ca mm: rework memcg kernel stack accounting
f3a2fccbce15 mm: slowly shrink slabs with a relatively small number of objects
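For reference, pulling the patches above into a local test tree looks roughly like this (run inside an existing kernel source checkout; the remote name is illustrative, and the linux-next hashes may need manual backporting on older kernels such as 4.15):

```
# Fetch linux-next and cherry-pick the five fixes listed above.
git remote add linux-next https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
git fetch linux-next
git cherry-pick 010cb21d4ede f77d7a05670d d18bf0af683e 71cd51b2e1ca f3a2fccbce15
```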
Daniel McGinnes (djmcginnes) wrote (#8):
Hi, any update on what needs to happen to get these patches backported to a 4.15 kernel build for Ubuntu 18.04?
Joseph Salisbury (jsalisbury) wrote (#9):
Because there are a few patches and they affect the memory-management layer, it might be best to submit them to the Ubuntu Kernel Team mailing list for feedback.
<email address hidden>
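For reference, the usual flow is git format-patch plus git send-email against the backported series (the list address is hidden above, so the variable below is a placeholder):

```
# Illustrative submission of the five backported patches for review.
KERNEL_TEAM_LIST="kernel-team@example.invalid"   # placeholder for the hidden list address
git format-patch --cover-letter -o outgoing/ -5  # last five commits in the local tree
git send-email --to="$KERNEL_TEAM_LIST" outgoing/*.patch
```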
Joshua R. Poulson (jrp) wrote (#10):
This issue also exists in the linux-azure kernel series.
Dexuan Cui (decui) wrote (#11):
More patches are required: https:/
It looks like we'll have to wait for some time before the kernel stabilizes...
Changed in linux-azure (Ubuntu):
status: New → Triaged
Changed in linux-azure (Ubuntu Bionic):
status: New → Triaged
Changed in linux-azure (Ubuntu Cosmic):
status: New → Triaged
no longer affects: linux-azure (Ubuntu Cosmic)
no longer affects: linux-azure (Ubuntu Bionic)
Changed in linux-azure (Ubuntu):
importance: Undecided → High
tags: added: cscc
This change was made by a bot.