This was a fascinating problem, which resulted in me instrumenting the PeerHARelationAdapter() class in charms.openstack to find out what was going on. The cause of the problem (in this case) is that haproxy.cfg is being written with the wrong addresses for the backends. From designate/0's /etc/haproxy/haproxy.cfg:

    backend designate-api_admin_10.244.8.143
        balance leastconn
        server designate-0 10.244.8.143:8991 check
        server designate-1 10.246.64.255:8991 check
        server designate-2 10.246.64.251:8991 check

Notice that the addresses for designate-1 and designate-2 are in 10.246.64.x, not 10.244.8.x. This means that 2/3 of calls to the designate-api will fail, as the apache2 servers on those hosts are not listening on those addresses.

The IP addresses for designate/1 on the cluster peer relation are (from designate/0):

    root@juju-226d47-0-lxd-3:/var/lib/juju/agents/unit-designate-0/charm# relation-get -r cluster:6 - designate/1
    admin-address: 192.168.33.190
    egress-subnets: 10.246.64.255/32
    ingress-address: 10.246.64.255
    internal-address: 192.168.33.190
    private-address: 10.246.64.255
    public-address: 10.244.8.150
    rndc-address: 192.168.33.190

i.e. the charm is using 'private-address' rather than 'public-address' for designate/1 in the haproxy.cfg.

So that's what's happening; now to show why:

---

The haproxy.cfg is written from the context supplied (in the designate charm) by the PeerHARelationAdapter() adapter class.
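To make the symptom concrete: a backend address is only usable if it sits on the same network as the frontend listener. The following is a minimal illustrative sketch (not the charm's actual code; `backends_on_network` is a hypothetical helper) showing that the private-addresses above fall outside the 10.244.8.0/24 network the backend section is keyed on:

```python
# Illustrative sketch: filter peer addresses by network membership.
# This is NOT charms.openstack code; it just demonstrates why
# 10.246.64.x backends are unreachable under a 10.244.8.x frontend.
import ipaddress

def backends_on_network(network_cidr, peer_addresses):
    """Keep only the peer addresses that fall inside the given network."""
    net = ipaddress.ip_network(network_cidr, strict=False)
    return {unit: addr for unit, addr in peer_addresses.items()
            if ipaddress.ip_address(addr) in net}

# The backends as written into haproxy.cfg (from the bug above):
peers = {
    'designate-0': '10.244.8.143',
    'designate-1': '10.246.64.255',   # private-address: wrong network
    'designate-2': '10.246.64.251',   # private-address: wrong network
}
print(backends_on_network('10.244.8.143/24', peers))
# -> {'designate-0': '10.244.8.143'}  (2/3 of the backends are unreachable)
```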
With instrumented code we see:

    > PeerHARelationAdapter __init__
    > add_network_split_addresses
    > local_network_split_addresses
    < local_network_split_addresses: cluster hosts: OrderedDict([('10.244.8.143', {'network': '10.244.8.143/255.255.255.0', 'backends': OrderedDict([('designate-0', '10.244.8.143')])}), ('192.168.33.183', {'network': '192.168.33.183/255.255.255.128', 'backends': OrderedDict([('designate-0', '192.168.33.183')])})])
    > local_network_split_addresses
    < local_network_split_addresses: cluster hosts: OrderedDict([('10.244.8.143', {'network': '10.244.8.143/255.255.255.0', 'backends': OrderedDict([('designate-0', '10.244.8.143')])}), ('192.168.33.183', {'network': '192.168.33.183/255.255.255.128', 'backends': OrderedDict([('designate-0', '192.168.33.183')])})])
    > local_network_split_addresses
    < local_network_split_addresses: cluster hosts: OrderedDict([('10.244.8.143', {'network': '10.244.8.143/255.255.255.0', 'backends': OrderedDict([('designate-0', '10.244.8.143')])}), ('192.168.33.183', {'network': '192.168.33.183/255.255.255.128', 'backends': OrderedDict([('designate-0', '192.168.33.183')])})])
    < add_network_split_addresses: OrderedDict([('10.244.8.143', {'network': '10.244.8.143/255.255.255.0', 'backends': OrderedDict([('designate-0', '10.244.8.143'), ('designate-1', '10.244.8.150'), ('designate-2', '10.244.8.146')])}), ('192.168.33.183', {'network': '192.168.33.183/255.255.255.128', 'backends': OrderedDict([('designate-0', '192.168.33.183'), ('designate-1', '192.168.33.190'), ('designate-2', '192.168.33.186')])})])
    > add_default_addesses
    > local_default_addresses
    < local_default_addresses: returns: {'10.244.8.143': {'network': '10.244.8.143/255.255.255.0', 'backends': OrderedDict([('designate-0', '10.244.8.143')])}}
    < add_default_addesses ..
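For readers unfamiliar with the shape of that trace output: local_network_split_addresses reports, for each local address, the network it belongs to plus a backends map seeded with only the local unit. A minimal sketch of that shape (an assumed illustration, not the charms.openstack implementation) looks like:

```python
# Illustrative sketch of the data structure in the trace above:
# each local address keyed by its network, with the local unit as
# the initial (only) backend. Not the actual charm code.
from collections import OrderedDict

def local_network_split_addresses(unit_name, local_addrs):
    """local_addrs: iterable of (address, netmask) pairs for this unit."""
    hosts = OrderedDict()
    for addr, netmask in local_addrs:
        hosts[addr] = {
            'network': '{}/{}'.format(addr, netmask),
            'backends': OrderedDict([(unit_name, addr)]),
        }
    return hosts

# Reproduces the structure seen for designate/0 in the trace:
hosts = local_network_split_addresses(
    'designate-0',
    [('10.244.8.143', '255.255.255.0'),
     ('192.168.33.183', '255.255.255.128')])
```

The later "< add_network_split_addresses" line then shows the same structure after the peers' addresses have been merged into each network's backends map.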
    self.cluster_hosts: OrderedDict([('10.244.8.143', {'network': '10.244.8.143/255.255.255.0', 'backends': OrderedDict([('designate-0', '10.244.8.143'), ('designate-1', '10.246.64.255'), ('designate-2', '10.246.64.251')])}), ('192.168.33.183', {'network': '192.168.33.183/255.255.255.128', 'backends': OrderedDict([('designate-0', '192.168.33.183'), ('designate-1', '192.168.33.190'), ('designate-2', '192.168.33.186')])})])
    < __init__ done.

The key part is that, by the time __init__() completes, add_default_addresses() has left self.cluster_hosts with the backends 10.244.8.143, 10.246.64.255 and 10.246.64.251, which are precisely the wrong values. However, "< add_network_split_addresses" did have the right addresses:

    'backends': OrderedDict([('designate-0', '10.244.8.143'), ('designate-1', '10.244.8.150'), ('designate-2', '10.244.8.146')])

And the __init__() function ends with:

    if relation:
        self.add_network_split_addresses()
        self.add_default_addresses()

So it looks like it comes up with the correct addresses and then overwrites them with the wrong ones. This code hasn't changed in years, so it's probably some other part of the stack (either in charms.reactive or in the interface) that has 'broken' add_default_addresses() such that it overwrites the addresses.

---

Solution: I propose reversing the order of the two calls (add_default_addresses() first, then add_network_split_addresses()) so that the network-split addresses win. This does work on the system-under-test during manual debugging.