fw01/02 have bond0.21 set up with fe80::1 as the VIP used as the network gateway:
root@fw01:~# ip -6 a show bond0.21
8: bond0.21@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
inet6 2620:a:b:21::2/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::210:18ff:fe77:b558/64 scope link
valid_lft forever preferred_lft forever
# fw02 is currently primary/master
root@fw02:~# ip -6 a show bond0.21
8: bond0.21@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
inet6 2620:a:b:21::3/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::1/64 scope link nodad
valid_lft forever preferred_lft forever
inet6 fe80::210:18ff:febe:6750/64 scope link
valid_lft forever preferred_lft forever
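The report does not say how fe80::1 moves between the two firewalls; the nodad flag on fw02 suggests it is plumbed as a floating address by the failover tooling. A minimal sketch of how that could look with keepalived (an assumption — the interface name comes from the output above, everything else is illustrative):

```
# /etc/keepalived/keepalived.conf (hypothetical sketch)
vrrp_instance V6_GW {
    interface bond0.21
    virtual_router_id 21          # illustrative value
    priority 150                  # higher on the intended master
    virtual_ipaddress {
        fe80::1 dev bond0.21      # the shared gateway VIP
    }
    # radvd should follow the VIP: run only on the current master
    notify_master "/usr/bin/systemctl start radvd"
    notify_backup "/usr/bin/systemctl stop radvd"
}
```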
and radvd only runs on the primary/master fw (02 ATM).
After a failover, Bionic clients using netplan/systemd-networkd are left with bogus extra default nexthops like this:
$ ip -6 ro
2620:a:b:21::/64 dev eth0 proto ra metric 1024 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default proto ra metric 1024
nexthop via fe80::1 dev eth0 weight 1
nexthop via fe80::210:18ff:fe77:b558 dev eth0 weight 1
nexthop via fe80::210:18ff:febe:6750 dev eth0 weight 1
This prevents them from communicating properly. To fix it, one has to manually run:
sudo ip -6 ro del default proto ra && sudo netplan apply
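To spot clients stuck in the broken state, one can parse `ip -6 route` output for a multipath `proto ra` default route carrying nexthops other than the VIP. A small sketch (the function name and the expected-gateway default are mine, not from the report):

```python
import re

def stale_ra_nexthops(route_dump, expected_gw="fe80::1"):
    """Return nexthops of a multipath 'proto ra' default route other
    than the expected gateway.  An empty list means the default route
    looks sane (either single-hop or multipath via the VIP only)."""
    hops = []
    in_default = False
    for line in route_dump.splitlines():
        line = line.strip()
        if line.startswith("default") and "proto ra" in line:
            # Multipath routes print 'default proto ra ...' followed by
            # one indented 'nexthop via ...' line per advertising router.
            in_default = True
            continue
        if in_default:
            m = re.match(r"nexthop via (\S+)", line)
            if m:
                if m.group(1) != expected_gw:
                    hops.append(m.group(1))
                continue
            in_default = False
    return hops
```

Feeding it the broken output above returns the two per-firewall link-local addresses; the fixed output returns an empty list.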
This then gives the expected route entries:
$ ip -6 ro
2620:a:b:21::/64 dev eth0 proto ra metric 1024 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::1 dev eth0 proto ra metric 1024 pref medium
On the other hand, machines using ifupdown (Xenial or Bionic) in the same network segment have no problem keeping only fe80::1 as the default nexthop.
fw01/02 /etc/radvd.conf looks like this:
interface bond0.21
{
    AdvSendAdvert on;
    MaxRtrAdvInterval 30;
    prefix 2620:a:b:21::/64
    {
    };
};
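Not part of the original report, but two radvd knobs are worth noting here. With MaxRtrAdvInterval 30, the default router lifetime is 3 × MaxRtrAdvInterval = 90 seconds, so a well-behaved client should expire a vanished router's nexthop within that window; making the lifetime explicit documents the intent. Also, radvd normally sends a final RA with router lifetime 0 on clean shutdown, so stopping it gracefully on the demoted firewall should make clients drop that nexthop immediately. A sketch:

```
interface bond0.21
{
    AdvSendAdvert on;
    MaxRtrAdvInterval 30;
    # Explicit router lifetime (radvd's default is 3 * MaxRtrAdvInterval);
    # a shorter value makes clients expire a vanished router sooner.
    AdvDefaultLifetime 90;
    prefix 2620:a:b:21::/64
    {
    };
};
```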