data-port and bridge-mapping config changes do not remove stale configurations from OVS database

Bug #1706705 reported by Drew Freiberger
This bug affects 3 people
Affects: OpenStack Neutron Open vSwitch Charm
Status: Triaged
Importance: Wishlist
Assigned to: Unassigned

Bug Description

When changing the neutron-openvswitch configuration to remove a bridge-mapping and data-port entry (to back out of an improper OVS configuration), it was discovered that the charm does not remove the affected bridge. It would be handy if the charm either logged that the configuration change requires clearing the OVS database and restarting OVS, or provided an action to clean and rebuild the OVS database on a given unit.

Steps to re-create:
1. Set up neutron with GRE tunnelling for overlay networking, with eth0 having an L3 IP and eth1 being a provider trunk for VLAN tagging, with bindings "data: overlay-space".
2. Set neutron-openvswitch data-port to 'br-data:eth0 br-provider:eth1' and bridge-mappings to 'physnet1:br-data physnet2:br-provider'.
Note that br-data now blocks overlay traffic between nodes, because the OVS bridge captures the overlay traffic before it reaches eth0's L3 IP interface.
3. Back out the change to neutron-openvswitch so that data-port is 'br-provider:eth1' and bridge-mappings is 'physnet2:br-provider'.
4. Observe that the br-data bridge is still present in the OVS database with 'ovs-vsctl show'.
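The stale bridge left behind after step 4 can be spotted by diffing the bridges the charm's current bridge-mappings expect against what OVS actually has. A minimal sketch, assuming whitespace-separated bridge lists (on a real unit the "actual" side would come from 'ovs-vsctl list-br'; the lists below are illustrative):

```shell
#!/bin/sh
# stale_bridges: print bridges present in OVS but absent from the
# charm's current bridge-mappings. Both arguments are whitespace-
# separated lists of bridge names.
stale_bridges() {
    expected=$1    # bridges derived from bridge-mappings
    actual=$2      # e.g. output of: ovs-vsctl list-br
    for br in $actual; do
        case " $expected " in
            *" $br "*) ;;        # still configured, keep it
            *) echo "$br" ;;     # stale, candidate for ovs-vsctl del-br
        esac
    done
}

# After the backout in step 3 the charm expects only br-provider,
# but OVS still carries br-data (hypothetical lists for illustration):
stale_bridges "br-int br-tun br-provider" "br-int br-tun br-provider br-data"
# prints: br-data
```

Any bridge printed is a candidate for manual cleanup with 'ovs-vsctl del-br'.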

The confusion arose from thinking that, when specifying bridge-mappings and data-port for provider networking, we would also have to manually configure the data overlay; we did not understand exactly how bindings:data: worked, so we added br-data as we had at other sites, where br-data was necessary because of a VLAN overlay. GRE, of course, does not require an L2 bridge for overlay networking, but backing out of this change via juju config did not return the OVS database to its prior state.

One of our architects identified that recovering from such OVS charm changes requires deleting the OVS database and restarting OVS and neutron-openvswitch-agent:

sudo systemctl stop neutron-openvswitch-agent
sudo systemctl disable neutron-openvswitch-agent
sudo systemctl stop openvswitch-switch
sudo rm -rf /var/log/openvswitch/*
sudo rm -rf /etc/openvswitch/conf.db
sudo systemctl start openvswitch-switch
sudo systemctl enable neutron-openvswitch-agent
sudo systemctl start neutron-openvswitch-agent
sudo ovs-vsctl show

charmers release 17.02/Xenial/Mitaka

Revision history for this message
James Page (james-page) wrote :

Inspection of previous configuration values should be possible, so that automatic removal of devices from bridges could occur when configuration changes; it's made slightly tricky by the fact that, in order to restore the previous state, we also need to understand whether an ifdown/ifup cycle needs to be performed on the NIC.
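Sketched in shell, the previous-vs-current comparison described above might emit the cleanup commands directly. This is a hedged sketch, not the charm's implementation: the config strings are illustrative, and a real fix would also need the ifdown/ifup handling mentioned:

```shell
#!/bin/sh
# Given previous and current data-port values ("bridge:port" pairs),
# print an ovs-vsctl del-port command for each pair that was removed.
emit_removals() {
    previous=$1
    current=$2
    for pair in $previous; do
        case " $current " in
            *" $pair "*) ;;   # still configured, nothing to do
            *)
                bridge=${pair%%:*}
                port=${pair#*:}
                echo "ovs-vsctl del-port $bridge $port"
                ;;
        esac
    done
}

# Backing out of the misconfiguration from the bug description:
emit_removals "br-data:eth0 br-provider:eth1" "br-provider:eth1"
# prints: ovs-vsctl del-port br-data eth0
```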

In terms of manually recovering from this problem, you should have been able to use:

  sudo ovs-vsctl del-port br-data eth0

which removes eth0 from the br-data bridge, returning it to the host OS.

Changed in charm-neutron-openvswitch:
status: New → Triaged
importance: Undecided → Wishlist
Revision history for this message
Alvaro Uria (aluria) wrote :

This bug also affects changes to "os-data-network": the charm will configure new tunnels on br-tun but not delete the ones built for the old "os-data-network".

Revision history for this message
Andrea Ieri (aieri) wrote :

For future travelers: recreating the OVS database as a workaround for this bug did not work properly for me. After restarting the services I was left with only br-int, and had to re-add br-data manually and stop/start instances to get tap ports back and return to a healthy state.

I have found the manual deletion of the "old" ports/bridges to be a lot simpler and more robust.
In my specific case this ended up being:

juju run -a neutron-openvswitch -- 'sudo sh -c "ovs-vsctl del-br br-mgmt; ovs-vsctl del-port int-br-mgmt"'
