juju 1.25 leaks memory (1.25.11+)
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
juju-core | Fix Released | High | Andrew Wilkins |
Bug Description
Juju 1.25.12 (the current latest) leaks memory over time.
I have deployed a test environment against a local (test) MAAS cluster: three physical nodes, each hosting 48 containers (144 units in total).
I have deployed 16 3-node MongoDB replica sets; each replica set is standalone and not related to anything (excluding 'internal' peer relations).
Then there are 16 3-node percona-cluster deployments with hacluster subordinates; each of those 3-node clusters is related to a 3-node keystone, also with an hacluster subordinate.
I have attached the juju status output to this bug report.
I deployed munin and munin-node on the bootstrap node and configured the meminfo plugin to track memory for the jujud and mongod processes. The munin graphs reside in my test environment, but I will copy them to a public web server for convenience.
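For reference, per-process tracking in munin is usually enabled with a short snippet in /etc/munin/plugin-conf.d/. The sketch below is only an illustration of that shape; the environment variable name (env.names) is an assumption and may differ between versions of the meminfo plugin:

```
# /etc/munin/plugin-conf.d/meminfo -- hedged example, not the exact
# configuration used in this environment. env.names is assumed to be
# the variable that lists the process names to graph.
[meminfo]
user root
env.names jujud mongod
```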
tags: added: canonical-bootstack
tags: added: sts
Changed in juju-core:
status: New → Triaged
importance: Undecided → High
milestone: none → 1.25.13
Changed in juju-core:
status: Fix Committed → Fix Released
The munin graphs for memory usage can be seen here:
http://people.canonical.com/~mario/lp1701481/www/localdomain/localhost.localdomain/index.html#memory
The environment was deployed on the 22nd (check the graph on the right side), and it has been left to sit untouched since.
On the 28th there is an interruption in the graphs because I ran out of disk space: I had misconfigured the 'juju heap profile collector' (a simple script that periodically collects juju heap profiles: https://code.launchpad.net/~mariosplivalo/+junk/juju-profiles-collector).
On the 29th I restarted jujud.
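For anyone reproducing this, a periodic collector in the spirit of the linked script can be sketched as below. This is a hedged illustration, not the actual juju-profiles-collector: the pprof URL and the output directory are assumptions (jujud 1.25 may not expose a pprof HTTP endpoint at all), and the pruning step is added here precisely to avoid the fill-the-disk failure mode described above.

```python
import os
import time
import urllib.request

# Hypothetical endpoint, purely for illustration; the real script may
# obtain profiles in a completely different way.
PPROF_URL = "http://localhost:6060/debug/pprof/heap"

def profile_path(base_dir, timestamp):
    """Build a timestamped file name for one collected heap profile."""
    name = time.strftime("heap-%Y%m%d-%H%M%S.pprof", time.gmtime(timestamp))
    return os.path.join(base_dir, name)

def prune_old_profiles(base_dir, keep=100):
    """Delete all but the newest `keep` profiles so the collector
    cannot fill the disk (the failure mode described in this comment)."""
    profiles = sorted(
        f for f in os.listdir(base_dir)
        if f.startswith("heap-") and f.endswith(".pprof")
    )
    for stale in profiles[:-keep]:
        os.remove(os.path.join(base_dir, stale))

def collect_once(base_dir, url=PPROF_URL):
    """Fetch one heap profile, write it next to the older ones, prune."""
    data = urllib.request.urlopen(url, timeout=30).read()
    path = profile_path(base_dir, time.time())
    with open(path, "wb") as f:
        f.write(data)
    prune_old_profiles(base_dir)
    return path
```

One would then run collect_once from cron or a sleep loop (e.g. every five minutes) for the lifetime of the experiment.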
I am attaching the machine-0.log file, as well as compressed profiles collected, to this bug.