starting standard state workers
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Canonical Juju | Expired | Medium | Unassigned | |
Bug Description
While doing some scale testing and deploying lots of applications, watching the logs shows *lots* of entries like these:
```
machine-0: 23:37:18 INFO juju.state starting standard state workers
machine-0: 23:37:18 INFO juju.worker start "txnlog"
machine-0: 23:37:18 INFO juju.worker start "presence"
machine-0: 23:37:18 INFO juju.worker start "leadership"
machine-0: 23:37:18 INFO juju.worker start "singular"
machine-0: 23:37:18 INFO juju.state creating cloud image metadata storage
machine-0: 23:37:18 INFO juju.state started state for model-f00a9bea-
machine-0: 23:37:18 INFO juju.worker runner is dying
machine-0: 23:37:18 INFO juju.worker stopped "txnlog", err: <nil>
machine-0: 23:37:18 INFO juju.worker stopped "singular", err: <nil>
machine-0: 23:37:18 INFO juju.worker stopped "presence", err: <nil>
machine-0: 23:37:18 INFO juju.worker stopped "leadership", err: <nil>
```
This may be creating a new State object for each application deployed, or some other oddity. Either way, we shouldn't be starting and then stopping four or more workers every second, as the log shows.
I was triggering this with:

```shell
for x in `seq -f "%04g" 1 5`; do echo $x; for m in `seq 0 4`; do juju deploy -m m-$m cs:~jameinel/
```

which deploys a new application into each of 5 different models concurrently, in a loop 5 times.
Changed in juju:
milestone: 2.2-beta3 → 2.2-beta4
Changed in juju:
milestone: 2.2-beta4 → 2.2-rc1
I'm curious whether this still happens, as the code should now be reusing elements from the state pool.
The only problem would be if the tests are creating new models; if they aren't, we shouldn't be seeing this.
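For illustration, the reuse the last comment describes can be sketched as a keyed pool that hands back one shared State per model instead of opening (and tearing down) a fresh one per request. This is a minimal sketch, not Juju's real API: the `state`, `open`, and `statePool` names here are hypothetical stand-ins for the expensive handle that starts the txnlog/presence/leadership/singular workers.

```go
package main

import (
	"fmt"
	"sync"
)

// state stands in for the expensive per-model handle that, when opened,
// starts background workers (txnlog, presence, leadership, singular).
// Hypothetical type, not Juju's actual state.State.
type state struct {
	modelUUID string
}

// open simulates the costly "starting standard state workers" path.
func open(modelUUID string) *state {
	fmt.Printf("starting standard state workers for %s\n", modelUUID)
	return &state{modelUUID: modelUUID}
}

// statePool caches one state per model UUID, so repeated requests for the
// same model share a single handle and the workers are started only once.
type statePool struct {
	mu   sync.Mutex
	pool map[string]*state
}

func newStatePool() *statePool {
	return &statePool{pool: make(map[string]*state)}
}

// get returns the cached state for modelUUID, opening it on first use.
func (p *statePool) get(modelUUID string) *state {
	p.mu.Lock()
	defer p.mu.Unlock()
	if st, ok := p.pool[modelUUID]; ok {
		return st // reuse: no worker start/stop churn in the logs
	}
	st := open(modelUUID)
	p.pool[modelUUID] = st
	return st
}

func main() {
	p := newStatePool()
	a := p.get("model-a")
	b := p.get("model-a") // second deploy into the same model
	fmt.Println("same handle:", a == b)
}
```

With a pool like this, a burst of deploys into the same model would log one "starting standard state workers" line per model rather than one per operation.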