HA and backup recovery tests failed
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| juju-core | | Critical | Unassigned | |
Bug Description
Build: #2581
Revision: gitbranch: http://
Failed tests:
- aws-deployer-bundle build #267 http://
- functional-
- functional-
- functional-
- functional-
The functional-
Ian Booth (wallyworld) wrote (#1):

Ian Booth (wallyworld) wrote (#2):
The stock functional-
Horacio Durán (hduran-8) wrote (#3):
I agree with Ian; the new status indicates it is still doing something (although I was under the impression that cs:ubuntu was a shallow charm), so it seems to me it is a matter of slowness of the system.
All the machine logs and the unit's log say everything went OK too.
The unit log, though, seems to imply that the charm is indeed installed properly, so I wonder whether something is broken with the charm itself.
Ian Booth (wallyworld) wrote (#4):
I think this is actually a duplicate of bug 1450917.
Some logic to transition the reported workload state out of "installing" wasn't being run, because missing hooks were not causing the correct code path to be executed.
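To make the described control-flow bug concrete, here is a minimal Go sketch of a simplified uniter. All names and types here are hypothetical illustrations, not Juju's actual code; the point is that a missing hook must not short-circuit the post-hook status transition, or the unit stays stuck reporting "installing".

```go
package main

import (
	"errors"
	"fmt"
)

// ErrHookMissing signals that the charm does not implement the hook.
// Shallow charms like cs:ubuntu implement almost no hooks.
var ErrHookMissing = errors.New("hook not implemented by charm")

type WorkloadStatus string

const (
	StatusInstalling WorkloadStatus = "installing"
	StatusActive     WorkloadStatus = "active"
)

type Unit struct {
	Status WorkloadStatus
}

// runHook pretends to execute a charm hook; for this sketch it always
// reports the hook as missing, as a shallow charm would.
func runHook(name string) error {
	return ErrHookMissing
}

// executeHook shows the fixed control flow: a real hook failure aborts,
// but a missing hook falls through so the state transition still runs.
// The bug described above was the equivalent of returning early on a
// missing hook, skipping the transition out of "installing".
func (u *Unit) executeHook(name string) error {
	err := runHook(name)
	if err != nil && !errors.Is(err, ErrHookMissing) {
		return err // genuine failures still abort
	}
	// Transition runs whether the hook executed or was absent.
	if name == "start" {
		u.Status = StatusActive
	}
	return nil
}

func main() {
	u := &Unit{Status: StatusInstalling}
	if err := u.executeHook("start"); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("workload status:", u.Status) // prints "active"
}
```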


The deployer bundle test logs show:
```
containers:
  2/lxc/0:
    agent-state-info: 'failed to retrieve the template to clone: template container
      "juju-trusty-lxc-template" did not stop'
    instance-id: pending
    series: trusty
  2/lxc/1:
    agent-state-info: 'lxc container cloning failed: cannot clone a running container'
    instance-id: pending
    series: trusty
```
This is already reported as bug 1441319 (it may be the same issue). When this has happened in the past, logging onto the node afterwards showed that the template containers did eventually stop, but not before Juju gave up waiting. Juju waits 5 minutes, but if the host node is I/O-bound or has other similar issues, the container takes too long to run cloud-init and then shut down.
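For illustration, a minimal Go sketch of the wait-with-timeout pattern the report describes. The function names, polling interval, and stub probe are assumptions for the sketch, not Juju's actual code; the report only confirms the 5-minute wait and the resulting "did not stop" error.

```go
package main

import (
	"fmt"
	"time"
)

// containerStopped stands in for a real LXC state probe (for example,
// shelling out to lxc-info); here it is a stub that never succeeds,
// modelling an I/O-bound host where cloud-init runs too slowly.
func containerStopped(name string) bool {
	return false
}

// waitForStopped polls until the template container stops or the
// deadline passes. With a fixed timeout, a host that is merely slow
// (not broken) still produces the "did not stop" failure above.
func waitForStopped(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if containerStopped(name) {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("template container %q did not stop", name)
}

func main() {
	// The reported behaviour uses a 5-minute timeout; a short one is
	// used here so the demo finishes quickly.
	err := waitForStopped("juju-trusty-lxc-template", 10*time.Second)
	fmt.Println(err)
}
```

The fixed deadline is the design choice at issue: it trades a bounded provisioning time for spurious failures on slow hosts, which is why the containers are later found stopped when someone logs in to investigate.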