Jumbo packets being dropped in dpdk bonds
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
charm-ovn-chassis | Triaged | High | Unassigned |
Bug Description
During a migration of a queens/ovs OpenStack cloud to yoga/ovn we noticed that jumbo frames stopped working. This cloud is using DPDK, so I'm not sure whether non-DPDK deployments are affected too.
All three MTU-related parameters in the neutron-api charm are set to 9000, the same as before the migration, but those values do not appear to be respected afterwards.
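For reference, a sketch of how those charm options can be inspected and set with Juju. The option names below (`global-physnet-mtu`, `path-mtu`, `physical-network-mtus`) are my assumption of which three MTU-related parameters are meant; verify them against your deployed charm before applying:

```shell
# List the MTU-related options actually exposed by the deployed charm.
juju config neutron-api | grep -i mtu

# Assumed option names; "physnet1" is a hypothetical provider network label.
juju config neutron-api global-physnet-mtu=9000 path-mtu=9000
juju config neutron-api physical-network-mtus="physnet1:9000"
```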
Small packets (up to exactly 1500 bytes) go through normally and the network otherwise works well, but jumbo packets are silently dropped in both directions (VM to outside and outside to VM).
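One quick way to reproduce the symptom from inside an affected VM is a don't-fragment ping at both payload sizes. The destination address below is a hypothetical example; 28 bytes of IP+ICMP headers are subtracted from the MTU to get the payload size:

```shell
# 1472 + 28 header bytes = 1500: fits the default MTU, so this succeeds.
ping -c 3 -M do -s 1472 203.0.113.10

# 8972 + 28 = 9000: a jumbo frame, silently dropped while the OVS ports
# are still at MTU 1500.
ping -c 3 -M do -s 8972 203.0.113.10
```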
In this specific scenario there are no routers, and VMs are plugged directly into provider networks (set to internal, with DHCP enabled). I mention this only to exclude routers from the equation and make debugging easier.
From a conversation with Frode Nordahl: in the old scenario the neutron-openvswitch charm had a relation with neutron-api through which it received the MTU settings and applied them to the OVS ports. The new ovn-chassis charm does not have that relation, so it never applies those settings, and the OVS ports stay at the default MTU of 1500.
I'm going to test (and report back) the suggested workaround, which is to set the MTU manually on the ports directly in OVS:
ovs-vsctl set Interface dpdk-xxx mtu_request=9000
Update: manually setting the MTU on all dpdk-bondX interfaces did work, and jumbo packets are now going through.
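A sketch of applying and verifying the workaround; `dpdk-bond0` is an example interface name, repeat for each bond. For DPDK ports, `mtu_request` in OVSDB is the knob to use (kernel tools like `ip link` don't apply), and the read-only `mtu` column shows what the datapath actually accepted:

```shell
# Inspect current vs requested MTU on the bond interface (example name).
ovs-vsctl --columns=name,mtu,mtu_request list Interface dpdk-bond0

# Request jumbo MTU on the DPDK port.
ovs-vsctl set Interface dpdk-bond0 mtu_request=9000

# Confirm the datapath picked up the new value.
ovs-vsctl get Interface dpdk-bond0 mtu
```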