Comment 5 for bug 2017889

Maximilian Sesterhenn (msnatepg) wrote:

While working on this, a minor problem came up, and I think it's best to discuss it first:

I've now reached a phase in development where I'm able to ping instances through an EVPN fabric on a public provider network.

However, the return path (from the VM to the internet) is problematic.
As in the BGP driver, br-ex has proxy_arp / proxy_ndp activated to answer the ARP / NDP queries of the instances for their next-hop.
For proxy_arp this works fine, as it answers all requests; proxy_ndp, however, seems to require an explicit entry for each IP.
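
To make the asymmetry concrete, here is a rough sketch of what effectively has to happen on the host (the sysctl keys and the "ip -6 neigh add proxy" command are the standard kernel/iproute2 knobs; the helper function and the br-ex device name are just placeholders for illustration):

    import subprocess

    def enable_proxy(nic="br-ex", ipv6_addrs=()):
        # IPv4: answering ARP for everything is a single catch-all switch.
        subprocess.run(["sysctl", "-w",
                        "net.ipv4.conf.%s.proxy_arp=1" % nic], check=True)
        # IPv6: this sysctl only allows proxying, it does not answer anything yet.
        subprocess.run(["sysctl", "-w",
                        "net.ipv6.conf.%s.proxy_ndp=1" % nic], check=True)
        # Every address we want to answer neighbor solicitations for needs
        # its own explicit proxy entry.
        for addr in ipv6_addrs:
            subprocess.run(["ip", "-6", "neigh", "add", "proxy", addr,
                            "dev", nic], check=True)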

Now this depends on the configuration of the instances:
If the instance has an on-link default route through eth0 (i.e. one without a next-hop address), it will send neighbor solicitations for the final destination IP itself.
It's not really practical to add each and every possible target to the proxy_ndp configuration, and I haven't found any catch-all mechanism yet.

A different approach would be to configure a gateway for that subnet in Neutron; that way we would have a known next-hop that we could add to the proxy_ndp configuration.
My tests adding that manually were successful.
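
For reference, the manual test boiled down to roughly the following (the gateway address is just an example value; the point is that only a single, known address needs a proxy entry):

    import subprocess

    # Answer NS only for the (example) gateway address instead of every destination.
    subprocess.run(["sysctl", "-w", "net.ipv6.conf.br-ex.proxy_ndp=1"], check=True)
    subprocess.run(["ip", "-6", "neigh", "add", "proxy", "2001:db8:1::1",
                    "dev", "br-ex"], check=True)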

Unfortunately, we don't have that information in ovn-bgp-agent to my knowledge.
One solution I can think of is to have networking-bgpvpn add that information to the external_ids fields.
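
Just to make the idea concrete, the agent side could then look something like this (the external_ids key name is purely hypothetical, nothing like it exists today):

    import subprocess

    def maybe_add_gateway_proxy(external_ids, nic="br-ex"):
        # "neutron_bgpvpn:gateway_ip" is a made-up key name for illustration only.
        gateway = external_ids.get("neutron_bgpvpn:gateway_ip")
        if gateway and ":" in gateway:  # only IPv6 needs the proxy_ndp entry
            subprocess.run(["ip", "-6", "neigh", "add", "proxy", gateway,
                            "dev", nic], check=True)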

I guess that's not a problem today in the EVPN driver because the traffic is first routed through the OpenStack router.
I wonder how this works today in the BGP driver; shouldn't it have the same problem?
Maybe it's just too late :)

What do you think? Maybe you have an idea for some kind of catch-all logic in proxy_ndp?