Removing a unit from the cluster doesn't fully remove the unit from Pacemaker
Bug #1806505 reported by Xav Paice
This bug report is a duplicate of:
Bug #1400481: Removing unit from hacluster doesn't properly remove node from corosync.
This bug affects 1 person
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Percona Cluster Charm | Triaged | Low | Unassigned | |
Bug Description
I deployed a cluster of 3 percona-cluster units, then removed and added a unit twice.
This resulted in the pacemaker/corosync cluster being configured with 5 machines rather than only 3. The two units that were removed from Juju were not removed from the Pacemaker configs, and even after hand-editing the configs, running the config-changed hook put them back.
Observed on Xenial, Queens, and the 18.08 charms.
tl;dr: the removed nodes are deleted from the configuration file on disk, but they persist in corosync's membership state until manually purged. I've seen this in our QA deployment as well when we've been moving services around for the control plane. Functionally there should be no impact, but it is a bit ugly.
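The manual purge mentioned above can be sketched roughly as follows. This is a non-authoritative sketch, not the charm's procedure: the node name `juju-machine-4` is a hypothetical example, and it assumes a deployment where Pacemaker's `crm_node` and `corosync-cfgtool` are available on a surviving cluster member.

```shell
# Hypothetical stale node name; substitute the departed unit's node name.
STALE_NODE=juju-machine-4

# 1. Tell Pacemaker to forget the departed node.
sudo crm_node --remove "$STALE_NODE" --force

# 2. Delete the node's leftover nodelist entry from
#    /etc/corosync/corosync.conf by hand, then ask corosync
#    to reload its configuration.
sudo corosync-cfgtool -R
```

Note that, per the bug description, a subsequent run of the charm's config-changed hook may reintroduce the nodes, so a purge like this only helps once the charm itself stops re-adding them.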