In the deployment I looked at, two vips were requested:
$ juju config aodh vip
10.246.172.210 10.246.168.210
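For reference, a pair of vips like that is normally set as a single space-separated config value, e.g.:
$ juju config aodh vip="10.246.172.210 10.246.168.210"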
aodh translated this into a request for two different resources (from the ha relation data):
"res_aodh_ 8486566_ vip": " params ip=\"10. 246.168. 210\" meta migration- threshold= \"INFINITY\ " failure- timeout= \"5s\" op monitor timeout=\"20s\" eth0_vip" : " params ip=\"10. 246.172. 210\" nic=\"eth0\" cidr_netmask=\"24\" meta migration- threshold= \"INFINITY\ " failure- timeout= \"5s\" op
interval=\"10s\" depth=\"0\"",
"res_aodh_
monitor timeout=\"20s\" interval=\"10s\" depth=\"0\"",
...
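(If you want to see the full relation data on a live model, a recent enough Juju can dump it with something like:
$ juju show-unit aodh/0 --format yaml
which includes the data aodh published on the ha relation.)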
The hacluster charm then creates the new resources *but* with a new name for res_aodh_eth0_vip:
$ sudo crm status
Cluster Summary:
* Stack: corosync
* Current DC: juju-e310e9-1-lxd-0 (version 2.1.2-ada5c3b36e2) - partition with quorum
* Last updated: Fri Sep 2 14:57:12 2022
* Last change: Thu Sep 1 02:22:01 2022 by hacluster via crmd on juju-e310e9-1-lxd-0
* 3 nodes configured
* 5 resource instances configured
Node List:
* Online: [ juju-e310e9-0-lxd-0 juju-e310e9-1-lxd-0 juju-e310e9-2-lxd-0 ]
Full List of Resources:
* Resource Group: grp_aodh_vips:
* res_aodh_272179f_vip (ocf:heartbeat:IPaddr2): Started juju-e310e9-1-lxd-0
* res_aodh_8486566_vip (ocf:heartbeat:IPaddr2): Started juju-e310e9-1-lxd-0
* Clone Set: cl_res_aodh_haproxy [res_aodh_haproxy]:
* Started: [ juju-e310e9-0-lxd-0 juju-e310e9-1-lxd-0 juju-e310e9-2-lxd-0 ]
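To double-check what actually landed in the cluster configuration (including whether the nic="eth0" parameter survived), crmsh can dump the configured primitives, e.g.:
$ sudo crm configure show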
So both vips are configured and working but the resource name has changed.
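The new name is consistent with the hash-based naming charm-helpers uses for vip resources (update_hacluster_vip takes the first seven hex characters of a sha1 over the vip address). Assuming that is the scheme in play here, the suffix can be reproduced with something like:
$ echo -n 10.246.172.210 | sha1sum | cut -c1-7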
$ ping -c1 10.246.172.210
PING 10.246.172.210 (10.246.172.210) 56(84) bytes of data.
64 bytes from 10.246.172.210: icmp_seq=1 ttl=62 time=1.04 ms
--- 10.246.172.210 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.041/1.041/1.041/0.000 ms
$ ping -c1 10.246.168.210
PING 10.246.168.210 (10.246.168.210) 56(84) bytes of data.
64 bytes from 10.246.168.210: icmp_seq=1 ttl=63 time=0.613 ms
--- 10.246.168.210 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms
I think the issue is that the charm sets vip_iface to eth0 by default, and that default is what triggers the rename. TBH I thought the vip_iface option had been removed long ago.
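To confirm on an affected deployment, the option can be queried directly; if it is indeed defaulting to eth0 it should show up here:
$ juju config aodh vip_iface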