juju-check-wait is unhelpful when "juju run" is not working properly
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Mojo: Continuous Delivery for Juju | Confirmed | Medium | Unassigned |
Bug Description
Given an environment that is behaving like this:
(stg-pes-
- Error: 'fork/exec /usr/bin/ssh: cannot allocate memory'
  MachineId: "1"
  Stdout: ""
  UnitId: ksplice/1
- Error: 'fork/exec /usr/bin/ssh: cannot allocate memory'
  MachineId: "11"
  Stdout: ""
  UnitId: ksplice/2
- Error: 'fork/exec /usr/bin/ssh: cannot allocate memory'
  MachineId: "2"
  Stdout: ""
  UnitId: ksplice/3
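(For context, per-unit YAML of this shape is what "juju run" returns; an invocation along the following lines, illustrative rather than taken from the report, targets the affected units:

```
juju run --unit ksplice/1,ksplice/2,ksplice/3 'uptime'
```
)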
juju-check-wait will simply report, e.g.:
2017-01-10 01:16:42 [ERROR] ksplice/3 has failed. Insane output from commands.
2017-01-10 01:16:42 [ERROR] ksplice/2 has failed. Insane output from commands.
2017-01-10 01:16:42 [ERROR] ksplice/1 has failed. Insane output from commands.
which is less than helpful.
mojo should log any output it cannot parse (obviously not too useful here, but it might be in other situations) and the value of "Error", if any. Here the latter is a pretty strong hint to experienced Juju operators that some process on the controller node has probably leaked a bunch of memory and needs restarting, which was the case here.
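A minimal sketch of what this could look like (a hypothetical helper, not mojo's actual code; the function name and log wording are illustrative, and PyYAML is assumed since "juju run" emits YAML):

```python
import logging

import yaml  # PyYAML; assumed available, as `juju run` emits YAML

log = logging.getLogger(__name__)


def report_run_results(raw_output):
    """Log unparseable `juju run` output and any per-unit Error value.

    Hypothetical sketch: surface the raw text and the Error field
    instead of a bare "Insane output from commands" message.
    """
    try:
        results = yaml.safe_load(raw_output)
    except yaml.YAMLError as exc:
        # Keep the unparseable text; it may be meaningful elsewhere.
        log.error("Could not parse juju run output: %s", exc)
        log.error("Raw output was:\n%s", raw_output)
        return None
    for result in results or []:
        unit = result.get("UnitId", "<unknown unit>")
        error = result.get("Error")
        if error:
            # e.g. 'fork/exec /usr/bin/ssh: cannot allocate memory'
            log.error("%s has failed. Error: %s", unit, error)
        elif "Stdout" not in result:
            # No Error and no Stdout at all: dump whatever we got.
            log.error("%s: unrecognised output: %r", unit, result)
    return results
```

With something like this, the failure above would have logged the 'cannot allocate memory' errors directly, pointing at the controller's memory problem instead of requiring a manual re-run to diagnose.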
Changed in mojo:
importance: Undecided → Medium
status: New → Confirmed