Hello all,

While evaluating the new EVPN functionality in the NB BGP driver, I stumbled across some feature limitations. To be honest, I had completely forgotten about this issue, but it seems it is still a problem.

Since OVN / OVS effectively breaks out an L2 circuit to br-ex, we have to reply to ARP and NDP with our own MAC in order to be able to route the traffic. This is not new and is expected.

For IPv4, proxy_arp is enabled and answers all ARP requests, so we can route all the traffic. In addition, the driver adds the gateway IP (grabbed from the DHCP_Options table) to the local interface. While I assume this is not strictly necessary because proxy_arp is enabled anyway, it also makes the gateway IP itself externally reachable.

For IPv6, the situation is more difficult. The driver configures FRR to send RA announcements, which instruct the VM to add an address from a link-local (LLA) prefix to its local interface. As a result, the instance uses FRR's LLA as its default gateway instead of the default gateway configured in OpenStack. In contrast to IPv4, this only provides reachability to the default gateway, not to other hosts in the same subnet that reside outside of OVN. In addition, this behavior is neither really transparent to the user nor consistent with IPv4.

In my setup, I observed different behavior across images. Some, like Rocky 9, seem to honor the gateway configuration from the OpenStack metadata endpoint and install the wrong default route. Others, like Ubuntu 22.04, seem to ignore it and just use what is advertised in the FRR RA announcements.

There could be a couple of solutions to these problems:

The easiest would be to add the IP of the public gateway to the local interface, just like in IPv4. While I think this is not strictly necessary for IPv4 due to proxy_arp, it would be necessary here. This would ensure communication with the default gateway, but not with hosts outside of OVN in the same VNI.
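For reference, a minimal sketch of the IPv4 mechanism described above (interface name and addresses are placeholders for illustration, not what the driver actually uses):

```shell
# Placeholder interface (br-ex) and gateway address for illustration only.

# Enable proxy ARP on the exposing interface so the host answers
# ARP requests on behalf of any destination it can route:
sysctl -w net.ipv4.conf.br-ex.proxy_arp=1

# Additionally expose the subnet's gateway IP on the local interface,
# as the driver does with the address grabbed from DHCP_Options:
ip addr add 192.0.2.1/32 dev br-ex
```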
Unfortunately, the agent has no knowledge of the public gateway, as this information is grabbed from the DHCP_Options table, where gateway information exists for IPv4 but not for IPv6.

proxy_ndp could also be a solution and is already enabled, but it does not reply to any requests until each address you want an NA answer for has been added explicitly:

    ip -6 neigh add proxy <address> dev <interface>

This could be done for the default gateway if DHCP_Options carried it, but to route all traffic we would have to do this for every local destination in, e.g., a /64. That is not realistic.

My idea in the past was to replace all the proxy_arp and proxy_ndp functionality with OVS flows. Unfortunately, the kernel datapath in OVS is missing a critical feature to set the nd_options_type field, which is required for crafting a valid ICMPv6 NA packet: https://
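For comparison, a sketch of the proxy_ndp equivalent (again with placeholder interface and address), which also shows why per-address entries do not scale to a whole /64:

```shell
# Placeholder interface (br-ex) and address for illustration only.

# Enable NDP proxying (the agent already does this):
sysctl -w net.ipv6.conf.br-ex.proxy_ndp=1

# Unlike proxy_arp, proxy_ndp only answers neighbor solicitations for
# addresses that were explicitly added -- one entry per address, so
# covering every possible destination in a /64 this way is not feasible:
ip -6 neigh add proxy 2001:db8::1 dev br-ex
```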