add-unit causes haproxy to fail

Bug #1840415 reported by Xav Paice
Affects: OpenStack HA Cluster Charm · Status: Triaged · Importance: Wishlist · Assigned to: Unassigned

Bug Description

hacluster charm rev 53, on Bionic, with cluster_count set to 3. I had a cluster of three Glance units and, because I wanted to remove one, I added a fourth. That caused haproxy to fail (and alert) multiple times on the existing units, and on the new unit Corosync and Pacemaker failed to start.
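The scale-out step that triggered the failure can be sketched roughly as follows; the application names and exact commands are illustrative, not a verified reproduction recipe:

```shell
# Hypothetical sketch of the scenario described above.
# Assumes glance deployed with the hacluster subordinate and cluster_count=3.
juju config glance-hacluster cluster_count=3
juju status glance            # three glance units, cluster healthy

# Add a 4th unit, intending to remove one of the originals afterwards.
juju add-unit glance

# Observed result: haproxy fails on the existing units, and on the new
# unit corosync/pacemaker fail to start.
```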

Granted, if cluster_count is 3 we should have 3 units, but we need a clean way to replace units - either add then remove, or remove then add - and it's not entirely clear which order we should use.

The fix was simply to restart haproxy on the units where it had failed - the charm status did report that haproxy wasn't running, but we shouldn't need to intervene manually.
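The manual workaround can be sketched as below; the unit names are illustrative:

```shell
# Hypothetical recovery sketch: restart haproxy on each unit where it failed.
juju ssh glance/0 sudo systemctl restart haproxy
juju ssh glance/1 sudo systemctl restart haproxy

# Confirm haproxy is running again and the workload status clears.
juju ssh glance/0 sudo systemctl status haproxy
juju status glance
```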

Xav Paice (xavpaice)
affects: charm-haproxy → charm-hacluster
Ryan Beisner (1chb1n)
tags: added: scaleback
Revision history for this message
Ryan Beisner (1chb1n) wrote:

There are known gaps in scaling back / scaling down OpenStack Charms.

The relation-departed Juju primitive is essentially NotImplemented across all of the OpenStack Charms. That needs to be addressed at a higher level, as a concerted effort across the lot of charms.

We are tracking this type of issue with the LP tag "scaleback" in order to build a hit list of items to start with, and so that we can identify any common themes or areas where we should focus first.

FYI, https://bugs.launchpad.net/bugs/+bugs?field.tag=scaleback

Changed in charm-hacluster:
importance: Undecided → Wishlist
Changed in charm-hacluster:
status: New → Triaged