Charm stays blocked: Vault service not running

Bug #2008005 reported by Bas de Bruijne
Affects: vault-charm
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

In test run https://solutions.qa.canonical.com/v2/testruns/18abe5d1-6d6f-4066-bb81-77cff4ad25e1 (yoga on focal), vault stays blocked with the message `Vault service not running`. The vault unit's logs contain errors from pacemaker about missing STONITH resources:

```
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-attrd [16053] (attrd_cib_callback) info: * shutdown[vault-1]=1676697190
pacemaker/pacemaker.log:Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (unpack_resources) error: Resource start-up disabled since no STONITH resources have been defined
pacemaker/pacemaker.log:Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (unpack_resources) error: Either configure some or disable STONITH with the stonith-enabled option
pacemaker/pacemaker.log:Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (unpack_resources) error: NOTE: Clusters with shared data need STONITH to ensure data integrity
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (determine_online_status) info: Node vault-1 is shutting down
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (unpack_node_loop) info: Node 1000 is already processed
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (unpack_node_loop) info: Node 1000 is already processed
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (stage6) notice: Delaying fencing operations until there are resources to manage
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (sched_shutdown_op) notice: Scheduling shutdown of node vault-1
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (LogNodeActions) notice: * Shutdown vault-1
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (pcmk__log_transition_summary) notice: Calculated transition 1, saving inputs in /var/lib/pacemaker/pengine/pe-input-1.bz2
pacemaker/pacemaker.log:Feb 18 05:13:10 vault-1 pacemaker-schedulerd[16054] (pcmk__log_transition_summary) notice: Configuration errors found during scheduler processing, please run "crm_verify -L" to identify issues
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-controld [16055] (do_state_transition) info: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-controld [16055] (do_te_invoke) info: Processing graph 1 (ref=pe_calc-dc-1676697190-10) derived from /var/lib/pacemaker/pengine/pe-input-1.bz2
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-controld [16055] (te_crm_command) info: Executing crm-event (1) locally without waiting: do_shutdown on vault-1
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-controld [16055] (te_crm_command) info: crm-event (1) is a local shutdown
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-controld [16055] (run_graph) notice: Transition 1 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Complete
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-controld [16055] (do_log) info: Input I_STOP received in state S_TRANSITION_ENGINE from notify_crmd
pacemaker/pacemaker.log-Feb 18 05:13:10 vault-1 pacemaker-controld [16055] (do_state_transition) info: State transition S_TRANSITION_ENGINE -> S_STOPPING | input=I_STOP cause=C_FSA_INTERNAL origin=notify_crmd
```
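The scheduler log above itself suggests running `crm_verify -L` to enumerate the configuration errors. A minimal sketch of how to inspect this on the affected unit, assuming crmsh is available there (as it normally is alongside pacemaker on Ubuntu); the final `stonith-enabled=false` step is only appropriate if fencing is intentionally not configured for this cluster:

```
# Run on the vault-1 unit.
sudo crm_verify -L -V    # validate the live CIB and list the reported errors
sudo crm status          # show node and resource state as pacemaker sees it

# If this deployment deliberately has no fencing devices, the STONITH error
# can be silenced by disabling it (assumption: no shared data requiring fencing):
sudo crm configure property stonith-enabled=false
```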

The debug-log of the vault unit itself looks normal, so I'm not sure what exactly the issue is.
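Since the charm's status message indicates the vault service itself is down, a hedged sketch of the first checks I would run; the systemd unit name `vault` is an assumption here and may differ (e.g. a snap-managed install can expose a `snap.vault.*` unit):

```
# On the vault unit; the service name is an assumption, adjust as needed.
systemctl status vault
sudo journalctl -u vault --no-pager | tail -n 100

# From the Juju client, confirm the workload status and replay the unit log:
juju status vault
juju debug-log --include vault/1 --replay | tail -n 100
```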

Logs and configs can be found here:
https://oil-jenkins.canonical.com/artifacts/18abe5d1-6d6f-4066-bb81-77cff4ad25e1/index.html
