add-unit causes haproxy to fail
Bug #1840415 reported by Xav Paice
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack HA Cluster Charm | Triaged | Wishlist | Unassigned | |
Bug Description
hacluster charm rev 53, on Bionic, with cluster_count set to 3. I had a cluster of three Glance units and, because I wanted to remove one, I added a fourth. That caused haproxy to fail repeatedly (and alert) on the existing units, and on the new unit Corosync and Pacemaker failed to start.

Granted, if cluster_count is 3 we should have three units, but we need a clean way to move units around - either add/remove or remove/add - and it's not entirely clear which we should do.

The fix was simply to restart haproxy on the units where it had failed. The status did report that haproxy wasn't running, but we shouldn't need to intervene manually.
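For reference, the manual recovery described above amounts to roughly the following commands. This is a workaround sketch, not a charm fix; the unit names are illustrative and a Juju 2.x client is assumed:

```shell
# See which units report haproxy as not running
juju status glance

# Restart haproxy on each existing unit where it failed
# (juju run executes as root on the unit, so no sudo is needed)
juju run --unit glance/0 'systemctl restart haproxy'
juju run --unit glance/1 'systemctl restart haproxy'

# On the newly added unit, corosync and pacemaker also failed to start
juju run --unit glance/3 'systemctl start corosync pacemaker'
```

These commands only clear the symptom; the units still need the cluster membership sorted out before scaling back.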
affects: charm-haproxy → charm-hacluster
tags: added: scaleback
Changed in charm-hacluster:
status: New → Triaged
There are known gaps in scaling back / scaling down OpenStack Charms.

The relation-departed Juju hook is essentially unimplemented across all of the OpenStack Charms. That needs to be addressed at a higher level, as a concerted effort across the full set of charms.

We are tracking this type of issue with the LP tag "scaleback" in order to build a hit list of items to start with, and so that we can identify any common themes or areas where we should focus first.
FYI, https://bugs.launchpad.net/bugs/+bugs?field.tag=scaleback
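To illustrate the gap, a classic-charm relation-departed hook might look roughly like the sketch below. This is entirely hypothetical - the charms ship no such hook today - and the hook name, node-naming convention, and crm_node cleanup are all assumptions about what would be needed, not the charm's actual design:

```shell
#!/bin/sh
# hooks/ha-relation-departed -- hypothetical sketch only; this hook is
# currently unimplemented in the charms (the gap described above).
set -e

# In a relation-departed hook, JUJU_REMOTE_UNIT names the departing unit.
departing_unit="$JUJU_REMOTE_UNIT"

# Assume corosync node names derive from unit names, e.g. glance/3 -> glance-3.
node_name=$(echo "$departing_unit" | tr '/' '-')

juju-log "Removing departed unit $departing_unit (node $node_name) from cluster"

# Drop the node from pacemaker so resources fail over cleanly instead of
# haproxy flapping on the surviving units; tolerate a node already gone.
crm_node --remove "$node_name" || true

# A real implementation would also regenerate corosync.conf for the
# reduced membership and reload corosync on the remaining units.
```

Something along these lines, done consistently across the charms, is what the "scaleback" effort would need to deliver.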