MTU problem for external network access
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MicroStack | New | Undecided | Unassigned |
Bug Description
In a 2-node MicroStack deployment, I followed the instructions to get proper external network connectivity from: https:/
When a VM tries to reach the external network, a ping of 1470 bytes fails, while only around 1400 bytes works. It seems that something prevents a standard MTU from working between the VM and the external network.
I can ping fine between different VMs with a full frame; the issue only occurs towards the external network.
Should the network_type be "geneve" for the private network? Following the traffic with tcpdump, the VM is running on the 1st node, but for some reason the traffic leaves towards the network via br-ex on the secondary node.
Could the issue be that the local network encapsulation adds extra headers, leaving no room for a full 1500 MTU?
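A quick way to see where the limit sits is to ping from the VM with the don't-fragment bit set and vary the payload size (the target address below is just a placeholder for any external host):

    # 1472 bytes of ICMP payload + 28 bytes of ICMP/IP headers = a 1500-byte packet
    ping -M do -s 1472 8.8.8.8
    # 1414 bytes of payload = a 1442-byte packet
    ping -M do -s 1414 8.8.8.8

If the 1442-byte packet gets through but the 1500-byte one does not, the overlay encapsulation is the limiting factor rather than the external network itself.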
clambert@
+------
| Field | Value |
+------
| admin_state_up | UP |
| availability_
| availability_zones | |
| created_at | 2021-07-
| description | |
| dns_domain | None |
| id | 9c40d4b4-
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| location | cloud='', project.domain_id=, project.
| mtu | 1500 |
| name | public |
| port_security_
| project_id | 6100575b7f95425
| provider:
| provider:
| provider:
| qos_policy_id | None |
| revision_number | 2 |
| router:external | External |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | 19c5c393-
| tags | |
| updated_at | 2021-07-
+------
clambert@
+------
| Field | Value |
+------
| admin_state_up | UP |
| availability_
| availability_zones | |
| created_at | 2021-07-
| description | |
| dns_domain | None |
| id | 5edbde4b-
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| is_vlan_transparent | None |
| location | cloud='', project.domain_id=, project.
| mtu | 1500 |
| name | private |
| port_security_
| project_id | 6100575b7f95425
| provider:
| provider:
| provider:
| qos_policy_id | None |
| revision_number | 2 |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | 680b3913-
| tags | |
| updated_at | 2021-07-
+------
clambert@
+------
| Field | Value |
+------
| admin_state_up | UP |
| availability_
| availability_zones | |
| created_at | 2021-07-
| description | |
| external_
| flavor_id | None |
| id | c9017708-
| interfaces_info | [{"port_id": "3dd3a07a-
| location | cloud='', project.domain_id=, project.
| name | router |
| project_id | 6100575b7f95425
| revision_number | 4 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2021-07-
+------
clambert@
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
3: docker0 inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\ valid_lft forever preferred_lft forever
5: br-ex inet 10.20.20.1/24 scope global br-ex\ valid_lft forever preferred_lft forever
5: br-ex inet 172.16.6.111/24 scope global br-ex\ valid_lft forever preferred_lft forever
5: br-ex inet6 fe80::cce:
6: br-int inet6 fe80::8c9a:
8: genev_sys_6081 inet6 fe80::702d:
clambert@
1: lo: <LOOPBACK,
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,
link/ether 00:0c:29:a4:38:bf brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,
link/ether 02:42:e8:c4:df:33 brd ff:ff:ff:ff:ff:ff
4: ovs-system: <BROADCAST,
link/ether 52:9a:eb:16:85:4a brd ff:ff:ff:ff:ff:ff
5: br-ex: <BROADCAST,
link/ether 00:0c:29:a4:38:bf brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,
link/ether 8e:9a:07:68:01:4f brd ff:ff:ff:ff:ff:ff
8: genev_sys_6081: <BROADCAST,
link/ether 72:2d:b6:4e:4b:28 brd ff:ff:ff:ff:ff:ff
clambert@
8043fc61-
Bridge br-int
fail_mode: secure
Port ovn-1f92da-0
Port br-int
Port patch-br-
Bridge br-ex
Port ens160
Port br-ex
Port patch-provnet-
ovs_version: "2.14.0"
I did more research on this issue... I saw that the default private network created by the MicroStack init had an MTU of 1442 instead of 1500. I assume that this is due to the overhead of the Geneve encapsulation between the compute nodes.
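The 58-byte difference matches the commonly cited overhead of OVN's Geneve tunnels over IPv4 (the option size is what OVN typically adds, so treat the breakdown as an approximation):

    1500  physical MTU
    -  20  outer IPv4 header
    -   8  outer UDP header
    -   8  Geneve base header
    -   8  Geneve option used by OVN (logical datapath/port metadata)
    -  14  inner Ethernet header
    = 1442  tenant network MTU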
Is it possible, due to OVN (in a 2-node cluster), that either of the 2 nodes could act as the network gateway? Meaning that traffic destined for the external network (type flat) via a floating IP may have to cross to the other node first through the Geneve tunnel in order to exit on the physical interface?
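One way to check this (assuming ovn-sbctl can reach the OVN southbound database; in MicroStack it may need to be invoked through the snap) is to look at which chassis currently binds the router's chassisredirect port:

    # list the chassis and the ports bound to each one
    ovn-sbctl show
    # or query the gateway (chassisredirect) port bindings directly
    ovn-sbctl find Port_Binding type=chassisredirect

If the cr-lrp-... port is bound to the second node, north-south traffic from a VM on the first node would indeed hairpin over the Geneve tunnel before leaving on br-ex, which would explain the tcpdump observation above.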
Is there a supported method with MicroStack to give virtual machines the full 1500 MTU, by running the private network over an interface with jumbo frames? I see 2 possible solutions: decrease the private network MTU, or increase the physical MTU, but I think the latter would require setting some parameters in the Neutron config files so that the higher MTU is recognized.
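For the jumbo-frame option, a rough sketch of the changes (the interface name, file locations and exact values are assumptions; MicroStack keeps its Neutron configuration inside the snap rather than under /etc/neutron, and every switch on the path must accept the larger frames):

    # on every node, raise the MTU of the NIC carrying the Geneve tunnels
    ip link set dev ens160 mtu 1558

    # neutron.conf
    [DEFAULT]
    global_physnet_mtu = 1558

    # ML2 configuration
    [ml2]
    path_mtu = 1558

    # after restarting the Neutron services, new Geneve networks get 1500;
    # an existing network can be updated with:
    openstack network set --mtu 1500 private

1558 is just 1500 plus the 58 bytes of Geneve overhead; a larger value such as 1600 or 9000 works equally well as long as the physical network supports it.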