Charmed OpenStack/OVN does not enable-distributed-floating-ip by default.

Bug #1987250 reported by Giuseppe Petralia
Affects                                  Status    Importance  Assigned to  Milestone
OpenStack Neutron API OVN Plugin Charm   Expired   Undecided   Unassigned
charm-ovn-central                        Invalid   Undecided   Unassigned
charm-ovn-chassis                        Invalid   Undecided   Unassigned

Bug Description

When using charmed OpenStack/OVN to install and configure OVN 20.03 or 22.03 with the default configuration, the environment does not use DVR; instead, each router's gateway is assigned to a specific chassis.
This can be confirmed with the ovn-nbctl utility, which can get, set, and delete gateway associations with chassis.
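For reference, the gateway-chassis association can be inspected and modified roughly as follows (the router port and chassis names here are illustrative, not from this deployment):

```shell
# List the chassis currently assigned as gateways for a router port,
# highest priority first (port name is illustrative).
ovn-nbctl lrp-get-gateway-chassis lrp-aabbccdd

# Pin the port to a specific chassis with priority 10
ovn-nbctl lrp-set-gateway-chassis lrp-aabbccdd chassis-1 10

# Remove the association again
ovn-nbctl lrp-del-gateway-chassis lrp-aabbccdd chassis-1
```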

Currently, the documentation does not make it clear how to configure the ovn-central and ovn-chassis charms to use DVR, nor what the current limitations of using it are, e.g. is DVR supported in environments using Neutron routers with SNAT disabled?

Is it possible to switch from non DVR to DVR setup? If yes how to achieve that?

description: updated
Revision history for this message
Frode Nordahl (fnordahl) wrote :

Thank you for the report Giuseppe.

There is a broad set of features that was collectively described as DVR for the Neutron ML2/OVS driver, so to answer your question/request we need to dig into what specific features you are interested in.

The instance-facing router is absolutely distributed with OVN, and all requests to this router are implemented and serviced by each compute node (ARP/ND, ICMP to router address, multicast services, DHCP etc.).

Inter-instance or East/West traffic is also distributed with OVN, and traffic flows directly from the source to the destination instance without passing through any central point.

In the current Neutron OVN driver implementation, North/South traffic is implemented using gateway chassis in an active/backup style setup for each individual project router, where one chassis is the active router and 4 other chassis are selected as backups. I assume this is the bit you are interested in?

N/S traffic to/from instances without a Floating IP (FIP) goes through the gateway chassis, regardless of whether SNAT is enabled for the project network's router.

There is support for distributing N/S traffic for instances that have a FIP, and this can be enabled with the `enable-distributed-floating-ip` configuration option on the neutron-api-plugin-ovn charm [0].
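As a sketch, enabling the option with Juju looks roughly like this (assuming the application is deployed under the default name `neutron-api-plugin-ovn`):

```shell
# Enable distributed FIP handling; note this requires external
# (provider network) connectivity on every hypervisor.
juju config neutron-api-plugin-ovn enable-distributed-floating-ip=true

# Confirm the current value of the option
juju config neutron-api-plugin-ovn enable-distributed-floating-ip
```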

It is possible to enable this option at any time and Neutron realizes this configuration by updating the NAT entry for the FIP [1].
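One way to observe that update taking effect (router name illustrative; Neutron typically names the logical router `neutron-<router-uuid>`) is to inspect the router's NAT table, where distributed dnat_and_snat entries carry external MAC and logical port fields:

```shell
# With distributed FIPs enabled, the dnat_and_snat rows for FIPs
# gain EXTERNAL_MAC and LOGICAL_PORT values, allowing the FIP to be
# handled directly on the chassis hosting the instance.
ovn-nbctl lr-nat-list neutron-<router-uuid>
```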

This is not enabled by default because enabling the option requires external connectivity to be present on every hypervisor, and this is not the case for all deployments. The upstream default is also to leave it disabled.

0: https://charmhub.io/neutron-api-plugin-ovn/configure#enable-distributed-floating-ip
1: https://github.com/openstack/neutron/blob/a0cdb83ff209983fa5f692f69e6390dbe57db0f8/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py#L1113-L1120

Revision history for this message
Giuseppe Petralia (peppepetra) wrote (last edit ):

Thanks Frode for the reply.

Correct, I am interested in N/S connectivity for instances with and without FIPs.

Is it possible to switch from the active/backup style setup for the gateways to a DVR style setup where each VM goes out on the same chassis where it is running?

From your comment I understand this is only achievable using FIPs and `enable-distributed-floating-ip` configuration option.

Revision history for this message
Frode Nordahl (fnordahl) wrote (last edit ):

> Is it possible to switch from the active/backup style setup for the gateways to a DVR style setup where each VM goes out on the same chassis where it is running?

In this context, I'm not quite sure what DVR style setup you are referring to. As long as you are not using a FIP, there is no transparent way to have routed traffic exit without associating it with the source IP and MAC of the virtual network's router. And as you can imagine, having a single source IP/MAC pop up in multiple places in the physical network would cause issues.

Depending on what you want to achieve, your alternatives would be:
* Rewire the physical network to have separate L2 broadcast domains per rack and encode your physical network topology in the OpenStack configuration, i.e. routed provider networks [0].
* Set up multiple IP addresses per virtual router and use ECMP routes to spread the load across them. While a setup like this is supported by OVN itself, there is currently no support in OpenStack for configuring it transparently.
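On the OVN side, the ECMP building block mentioned above can be sketched as follows (router name and next-hop addresses are illustrative, and this would have to be configured outside of OpenStack):

```shell
# Install two static routes for the same prefix with --ecmp;
# OVN then spreads traffic across both next hops.
ovn-nbctl --ecmp lr-route-add lr0 0.0.0.0/0 172.16.0.1
ovn-nbctl --ecmp lr-route-add lr0 0.0.0.0/0 172.16.0.2
```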

> From your comment I understand this is only achievable using FIPs and `enable-distributed-floating-ip` configuration option.

To have this happen transparently, yes.

0: https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html

Frode Nordahl (fnordahl)
summary: - Charmed OpenStack/OVN is not using DVR by default.
+ Charmed OpenStack/OVN does not enable-distributed-floating-ip by
+ default.
affects: charm-neutron-api → charm-neutron-api-plugin-ovn
Changed in charm-ovn-central:
status: New → Invalid
Changed in charm-ovn-chassis:
status: New → Invalid
Revision history for this message
Frode Nordahl (fnordahl) wrote :

Back to the core of this specific request: should `enable-distributed-floating-ip` be enabled by default? It is not, because enabling the option requires external connectivity to be present on every hypervisor, and this is not the case for all deployments. The upstream default is also to leave it disabled.

Frode Nordahl (fnordahl)
Changed in charm-neutron-api-plugin-ovn:
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack Neutron API OVN Plugin Charm because there has been no activity for 60 days.]

Changed in charm-neutron-api-plugin-ovn:
status: Incomplete → Expired