2019-04-02 08:33:31 |
Michael Vogt |
description |
We might have a memory leak in snapd 2.38+ on install - this could be related to the move from go-1.6 to go-1.10. This was reported by a customer. The data points we have so far:
- happens on refresh for the customer
- on systems with low amounts of memory (256M) this can lead to install/revert failures because e.g. unsquashfs will fail to extract the snap.yaml
- the customer is using a plugin that talks to the snapd socket regularly, so maybe we leak on the API side (a rough way to simulate that polling is sketched right after this list)
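One way to approximate that polling while the install loop below runs (assuming the plugin goes through the snapd REST API on /run/snapd.socket; the exact endpoint the customer uses is unknown, /v2/changes is just an example) would be:
$ while true; do curl -s --unix-socket /run/snapd.socket http://localhost/v2/changes >/dev/null; sleep 1; done
If the RSZ grows noticeably faster with this running, the leak is more likely in the API/daemon layer than in the install code itself.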
I did some initial testing that refreshes/installs the same snap a bunch of times (20 in this test) to see how that affects memory, once for 2.37.4 and once for 2.38-git. One thing to keep in mind is that the changes data requires a small amount of memory (this is the data that "snap changes" displays), so a small growth per install is expected. We prune those changes after pruneMaxChanges is hit or after 24h.
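To estimate how much of the growth is just this bookkeeping rather than a leak, one rough check (assuming the state is kept in /var/lib/snapd/state.json, as on a stock install) is to compare the state file size and the number of recorded changes before and after the loop:
$ sudo stat -c %s /var/lib/snapd/state.json; snap changes | wc -l
If the RSZ delta is much larger than the growth of state.json, the extra memory is not explained by the changes data alone.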
The data:
$ snap version
snap 2.37.4+18.04.1
snapd 2.37.4+18.04.1
series 16
ubuntu 18.04
kernel 4.15.0-46-generic
$ sudo systemctl restart snapd ; sudo snap install --dangerous /tmp/hello_20.snap; ps -C snapd -ocmd,rsz,vsz; for i in $(seq 20); do sudo snap install --dangerous /tmp/hello_20.snap ; done ; ps -C snapd -ocmd,rsz,vsz
hello 2.10 installed
CMD RSZ VSZ
/usr/lib/snapd/snapd 31580 1221524
hello 2.10 installed
...
hello 2.10 installed
CMD RSZ VSZ
/usr/lib/snapd/snapd 33952 1297088
and then with:
$ snap version
snap 2.38+git1216.13ed1b8~ubuntu16.04.1
snapd 2.38+git1216.13ed1b8~ubuntu16.04.1
series 16
ubuntu 18.04
kernel 4.15.0-46-generic
$ sudo systemctl restart snapd ; sudo snap install --dangerous /tmp/hello_20.snap; ps -C snapd -ocmd,rsz,vsz; for i in $(seq 20); do sudo snap install --dangerous /tmp/hello_20.snap ; done ; ps -C snapd -ocmd,rsz,vsz
hello 2.10 installed
CMD RSZ VSZ
/usr/lib/snapd/snapd 35832 1226080
hello 2.10 installed
...
hello 2.10 installed
CMD RSZ VSZ
/usr/lib/snapd/snapd 36760 1300068
So the baseline RSZ of snapd is bigger in 2.38, but in both versions the RSZ does grow over the 20 installs (roughly 2.3M for 2.37.4 and 0.9M for 2.38 here), so it seems like we indeed grow.
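A more detailed follow-up run could record the RSZ after every single install, to see whether the growth is roughly linear (which would point at a real leak) or flattens out after a while (which would point at expected Go runtime/changes-data overhead). A sketch using the same test snap as above:
$ for i in $(seq 50); do sudo snap install --dangerous /tmp/hello_20.snap >/dev/null; ps -C snapd -o rsz= | awk -v i="$i" '{print i, $1}'; done
Since snapd is a Go daemon, another option (standard Go runtime behaviour, nothing snapd specific) is to restart it with GODEBUG=gctrace=1 in its environment (e.g. via "sudo systemctl edit snapd" adding an Environment= line) and watch the gc lines in "journalctl -u snapd" to see whether the live heap itself keeps growing or whether the RSS is simply not returned to the kernel.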