Yeah, I believe this is the issue: hacluster-placement/1 didn't get to see all the peers joining the hanode relation:

$ grep hacluster-placement machine-lock.log | grep -v update-status | grep 'unit: hacluster'
2022-04-16 12:54:44 unit-hacluster-placement-1: hacluster-placement/1 uniter (run relation-joined (25; unit: hacluster-placement/0) hook), waited 23s, held 5s
2022-04-16 12:55:14 unit-hacluster-placement-1: hacluster-placement/1 uniter (run relation-changed (25; unit: hacluster-placement/0) hook), waited 24s, held 6s
2022-04-16 12:56:38 unit-placement-1: placement/1 uniter (run relation-joined (249; unit: hacluster-placement/1) hook), waited 19s, held 5s
2022-04-16 12:57:13 unit-hacluster-placement-1: hacluster-placement/1 uniter (run relation-changed (25; unit: hacluster-placement/0) hook), waited 12s, held 5s
2022-04-16 12:59:48 unit-placement-1: placement/1 uniter (run relation-changed (249; unit: hacluster-placement/1) hook), waited 9s, held 8s
2022-04-16 13:01:12 unit-nrpe-42: nrpe/42 uniter (run relation-joined (250; unit: hacluster-placement/1) hook), waited 13s, held 11s
2022-04-16 13:01:35 unit-nrpe-42: nrpe/42 uniter (run relation-changed (250; unit: hacluster-placement/1) hook), waited 12s, held 11s

Also, this unit wasn't the leader:

2022-04-16 12:46:59 DEBUG juju.worker.uniter.relation statetracker.go:221 unit "hacluster-placement/1" (leader=false) entered scope for relation "hacluster-placement:hanode"

This prevents this piece of code[0] from running:

...
if configure_corosync():
    try_pcmk_wait()
    if is_leader():
        run_initial_setup()  # <---!!
...

The function run_initial_setup() is the one in charge of disabling stonith[1], and because of the peer relation behaviour described above this unit was being configured as a single-node cluster:

$ journalctl --file ../journal/660a5f12c3b64778a026dc895e3d6c09/system.journal | grep 'adding new UDPU member'
abr 16 08:46:15 juju-d527c3-1-lxd-9 corosync[27385]: [TOTEM ] adding new UDPU member {10.246.165.57}
abr 16 08:51:34 juju-d527c3-1-lxd-9 corosync[35698]: [TOTEM ] adding new UDPU member {10.246.165.57}

^ That's the journal file for machine 1/lxd/9. For comparison, this is the same grep against nova-cloud-controller/0:

$ journalctl --file journal/c5b4fcd7022748cda9f41e1f6f6df7b9/system.journal | grep 'adding new UDPU member'
abr 16 08:46:23 juju-d527c3-0-lxd-7 corosync[30678]: [TOTEM ] adding new UDPU member {10.246.164.247}
abr 16 08:51:50 juju-d527c3-0-lxd-7 corosync[39528]: [TOTEM ] adding new UDPU member {10.246.164.247}
abr 16 08:56:48 juju-d527c3-0-lxd-7 corosync[48186]: [TOTEM ] adding new UDPU member {10.246.167.82}
abr 16 08:56:48 juju-d527c3-0-lxd-7 corosync[48186]: [TOTEM ] adding new UDPU member {10.246.165.194}
abr 16 08:56:48 juju-d527c3-0-lxd-7 corosync[48186]: [TOTEM ] adding new UDPU member {10.246.164.247}

So I believe this is not a charm issue but a Juju one. It is not necessarily a bug; it could simply be related to hooks still being processed in the queue.

[0] https://github.com/openstack/charm-hacluster/blob/master/hooks/hooks.py#L218-L221
[1] https://github.com/openstack/charm-hacluster/blob/master/hooks/hooks.py#L182
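
For illustration only, here is a minimal sketch of how the hanode peer count could be checked before treating the unit as ready for corosync configuration. The charmhelpers hookenv functions are real; the helper itself and the reliance on the charm's 'cluster_count' option are assumptions on my part, not the charm's actual implementation:

# Sketch only: gate corosync configuration on the expected number of
# hanode peers being visible. 'expected_peers_present' is a hypothetical
# helper, not part of charm-hacluster.
from charmhelpers.core.hookenv import (
    config,
    log,
    related_units,
    relation_ids,
)

def expected_peers_present():
    """Return True once the expected number of hanode members is visible."""
    # 'cluster_count' is the charm option for the expected cluster size;
    # fall back to 3 if it is unset (assumption for this sketch).
    expected = int(config('cluster_count') or 3)
    peers = set()
    for rid in relation_ids('hanode'):
        peers.update(related_units(rid))
    # +1 accounts for the local unit, which related_units() does not list.
    seen = len(peers) + 1
    log('hanode members seen: {} (expected {})'.format(seen, expected))
    return seen >= expected

With a guard like that, hacluster-placement/1 would have deferred corosync configuration until it had seen its peers, instead of coming up as a single-node cluster as shown in the journal above.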