Hi,
This is happening to us as well. We are on juju 1.24.5, and machine-0 jujud regularly eats up all the memory. machine-0.log even contains:
2015-10-13 06:17:56 DEBUG juju.worker runner.go:203 "diskmanager" done: cannot list block devices: lsblk failed: fork/exec /bin/lsblk: cannot allocate memory
When it happened today, both jujud and mongo were also at 100% CPU, and the load average on the node was around 20.
jujud also wrote ~300 MB to machine-0.log in about 45 minutes.
Once restarted, jujud resumed processing operations, and a few config-changed hooks fired.
Additionally, all the services in this environment regularly end up in the "lost" state. Restarting the machine-0 jujud brings them back to their normal "idle" status.
As evarlast asked: is there a known triage command we should run when this happens? Should I run some commands to extract useful information from the logs?
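
In the meantime, here is a rough sketch of what I could run to quantify this from machine-0.log. It is plain Python, nothing juju-specific; the /var/log/juju path and the minute-resolution timestamp format are assumptions based on the log excerpt above.

#!/usr/bin/env python
# Rough triage sketch (my own, not an official juju tool): count the
# memory-allocation failures in machine-0.log and estimate how fast
# jujud is writing to the log, bucketed per minute.
import re
from collections import Counter

LOG = "/var/log/juju/machine-0.log"  # assumed default path on machine 0

# Matches the "2015-10-13 06:17" prefix seen in the excerpt above,
# truncated to the minute so lines group into minute buckets.
ts_re = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2})")

alloc_errors = 0
bytes_per_minute = Counter()

with open(LOG) as f:
    for line in f:
        if "cannot allocate memory" in line:
            alloc_errors += 1
        m = ts_re.match(line)
        if m:
            bytes_per_minute[m.group(1)] += len(line)

print("lines mentioning 'cannot allocate memory':", alloc_errors)
# Show the last 10 minute buckets to see the current logging rate.
for minute, nbytes in sorted(bytes_per_minute.items())[-10:]:
    print(minute, "%.1f KB logged" % (nbytes / 1024.0))

If there is an official triage procedure instead, happy to run that and attach the output here.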
Thanks