PMTUd / DF broken in GRE tunnel configuration

Bug #1270646 reported by Robert Collins
This bug affects 4 people
Affects: tripleo | Status: Fix Released | Importance: Critical | Assigned to: James Slagle

Bug Description

PMTUd / DF is broken in our GRE tunnel configuration, causing slow network performance with OVS 1.10.2. Pulling the MTU on the VM down to 1440 gave 10 MB/s from LA to the UK.

That's with both the 1.10.2 DKMS package from Ubuntu and the 3.11 kernel datapath.

Tags: network ovs
Revision history for this message
Robert Collins (lifeless) wrote :

Marking this critical because broken PMTUd causes lots and lots of network issues. Also adding a neutron task, as we have followed the stock neutron setup AFAIK, so I suspect the problem is deeper than 'configured it wrongly'.

Robert Collins (lifeless) wrote :

ubuntu@te-broker:~$ sudo ip link set mtu 1458 dev eth0
ubuntu@te-broker:~$ wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
--2014-01-19 22:22:29-- http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
Resolving cloud-images.ubuntu.com (cloud-images.ubuntu.com)... 91.189.88.141
Connecting to cloud-images.ubuntu.com (cloud-images.ubuntu.com)|91.189.88.141|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 255328256 (244M) [application/octet-stream]
Saving to: ‘precise-server-cloudimg-amd64-disk1.img.10’

 0% [ ] 703,843 358KB/s ^C
ubuntu@te-broker:~$ sudo ip link set mtu 1459 dev eth0
ubuntu@te-broker:~$ wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
--2014-01-19 22:22:36-- http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
Resolving cloud-images.ubuntu.com (cloud-images.ubuntu.com)... 91.189.88.141
Connecting to cloud-images.ubuntu.com (cloud-images.ubuntu.com)|91.189.88.141|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 255328256 (244M) [application/octet-stream]
Saving to: ‘precise-server-cloudimg-amd64-disk1.img.11’

 0% [ ] 5,309 2.09KB/s ^C

That is, 1459 is bad, 1458 works.
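The 1458/1459 boundary is consistent with GRE tunnel overhead eating into a physical 1500-byte path MTU. A back-of-the-envelope sketch follows; the exact header accounting (the optional GRE key field and L2-in-GRE encapsulation of whole Ethernet frames) is an assumption on my part, not something confirmed in this report:

```python
# Back-of-the-envelope accounting for the observed 1458-byte limit.
# Assumption (not confirmed here): the OVS GRE tunnel carries whole
# Ethernet frames and uses the optional 4-byte GRE key field.
PHYS_MTU = 1500   # path MTU between the tunnel endpoints

OUTER_IP = 20     # outer IPv4 header
GRE_HDR = 4       # base GRE header
GRE_KEY = 4       # optional key field (multiplexes tenant networks)
INNER_ETH = 14    # tunnelled Ethernet header (L2-in-GRE)

overhead = OUTER_IP + GRE_HDR + GRE_KEY + INNER_ETH   # 42 bytes
inner_mtu = PHYS_MTU - overhead
print(inner_mtu)  # 1458 -- matches the good/bad boundary above
```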

Robert Collins (lifeless) wrote :

sudo tcpdump -npi eth0 icmp &
[1] 27519
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
ubuntu@te-broker:~$ wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
--2014-01-19 22:24:26-- http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
Resolving cloud-images.ubuntu.com (cloud-images.ubuntu.com)... 91.189.88.141
Connecting to cloud-images.ubuntu.com (cloud-images.ubuntu.com)|91.189.88.141|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 255328256 (244M) [application/octet-stream]
Saving to: ‘precise-server-cloudimg-amd64-disk1.img.13’

 0% [ ] 13,751 1.80KB/s eta 38h 4m ^C

No ICMP messages were received on eth0, so there is no way for PMTUD to work. At this point I don't know whether they aren't being generated or aren't being received.
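For reference, this is the decision a compliant router or tunnel endpoint must make per packet for PMTUD (RFC 1191) to function; the ICMP reply in the oversized-DF branch is exactly what never shows up in the capture above. A minimal sketch of the mechanism, not OVS code:

```python
def forward(packet_len: int, df_set: bool, link_mtu: int) -> str:
    """What a PMTUD-compliant router does with one IPv4 packet."""
    if packet_len <= link_mtu:
        return "forward"
    if df_set:
        # ICMP type 3, code 4 ("fragmentation needed"), carrying the
        # next-hop MTU so the sender can shrink its segments. Its absence
        # is the bug: the sender keeps retransmitting full-size packets.
        return f"drop + ICMP frag-needed (mtu {link_mtu})"
    return "fragment"

print(forward(1500, True, 1458))   # drop + ICMP frag-needed (mtu 1458)
print(forward(1400, True, 1458))   # forward
```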

Robert Collins (lifeless) wrote :

bad trace from the qr interface on the network node:
22:28:29.771486 IP 10.0.0.3.60932 > 91.189.88.141.80: Flags [.], ack 5628, win 339, options [nop,nop,TS val 54252874 ecr 2894978433,nop,nop,sack 1 {37989:39396}], length 0
22:28:29.908329 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 5628:9849, ack 1, win 122, options [nop,nop,TS val 2894978501 ecr 54252874], length 4221
22:28:30.281716 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 5628:7035, ack 1, win 122, options [nop,nop,TS val 2894978595 ecr 54252874], length 1407
22:28:30.282208 IP 10.0.0.3.60932 > 91.189.88.141.80: Flags [.], ack 7035, win 329, options [nop,nop,TS val 54253002 ecr 2894978595,nop,nop,sack 1 {37989:39396}], length 0
22:28:30.419068 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 7035:9849, ack 1, win 122, options [nop,nop,TS val 2894978629 ecr 54253002], length 2814
22:28:30.793600 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 7035:8442, ack 1, win 122, options [nop,nop,TS val 2894978723 ecr 54253002], length 1407
22:28:30.794110 IP 10.0.0.3.60932 > 91.189.88.141.80: Flags [.], ack 8442, win 320, options [nop,nop,TS val 54253130 ecr 2894978723,nop,nop,sack 1 {37989:39396}], length 0
22:28:30.930873 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 8442:11256, ack 1, win 122, options [nop,nop,TS val 2894978757 ecr 54253130], length 2814
22:28:31.305748 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 8442:9849, ack 1, win 122, options [nop,nop,TS val 2894978850 ecr 54253130], length 1407
22:28:31.306174 IP 10.0.0.3.60932 > 91.189.88.141.80: Flags [.], ack 9849, win 320, options [nop,nop,TS val 54253258 ecr 2894978850,nop,nop,sack 1 {37989:39396}], length 0
22:28:31.442968 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 9849:12663, ack 1, win 122, options [nop,nop,TS val 2894978885 ecr 54253258], length 2814
22:28:31.817753 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 9849:11256, ack 1, win 122, options [nop,nop,TS val 2894978979 ecr 54253258], length 1407
22:28:31.818246 IP 10.0.0.3.60932 > 91.189.88.141.80: Flags [.], ack 11256, win 320, options [nop,nop,TS val 54253386 ecr 2894978979,nop,nop,sack 1 {37989:39396}], length 0
22:28:31.954990 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 11256:14070, ack 1, win 122, options [nop,nop,TS val 2894979013 ecr 54253386], length 2814
22:28:32.325693 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 11256:12663, ack 1, win 122, options [nop,nop,TS val 2894979106 ecr 54253386], length 1407
22:28:32.326189 IP 10.0.0.3.60932 > 91.189.88.141.80: Flags [.], ack 12663, win 320, options [nop,nop,TS val 54253513 ecr 2894979106,nop,nop,sack 1 {37989:39396}], length 0
22:28:32.462982 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 12663:15477, ack 1, win 122, options [nop,nop,TS val 2894979140 ecr 54253513], length 2814
22:28:32.833764 IP 91.189.88.141.80 > 10.0.0.3.60932: Flags [.], seq 12663:14070, ack 1, win 122, options [nop,nop,TS val 2894979233 ecr 54253513], length 1407
22:28:32.834273 IP 10.0.0.3.60932 > 91.189.88.141.80: Flags [.], ack 14070, win 320, options [nop,nop,TS val 54253640 ecr 2894979233,nop,nop,sack 1 {37989:39396}], length 0
22:28:32.971085 IP 91.189.88.141....


Robert Collins (lifeless) wrote :

good trace:
22:29:40.209264 IP 10.0.0.3.60933 > 91.189.88.141.80: Flags [.], ack 1988084, win 4548, options [nop,nop,TS val 54270483 ecr 2894996076], length 0
22:29:40.209268 IP 10.0.0.3.60933 > 91.189.88.141.80: Flags [.], ack 1992302, win 4614, options [nop,nop,TS val 54270483 ecr 2894996076], length 0
22:29:40.209647 IP 10.0.0.3.60933 > 91.189.88.141.80: Flags [.], ack 2009174, win 4878, options [nop,nop,TS val 54270483 ecr 2894996077], length 0
22:29:40.209853 IP 10.0.0.3.60933 > 91.189.88.141.80: Flags [.], ack 2017610, win 5009, options [nop,nop,TS val 54270483 ecr 2894996077], length 0
22:29:40.209880 IP 10.0.0.3.60933 > 91.189.88.141.80: Flags [.], ack 2019016, win 5031, options [nop,nop,TS val 54270483 ecr 2894996077], length 0
22:29:40.209897 IP 10.0.0.3.60933 > 91.189.88.141.80: Flags [.], ack 2020422, win 5053, options [nop,nop,TS val 54270483 ecr 2894996077], length 0
22:29:40.209899 IP 10.0.0.3.60933 > 91.189.88.141.80: Flags [.], ack 2021828, win 5075, options [nop,nop,TS val 54270483 ecr 2894996077], length 0
22:29:40.209900 IP 10.0.0.3.60933 > 91.189.88.141.80: Flags [.], ack 2023234, win 5097, options [nop,nop,TS val 54270483 ecr 2894996077], length 0
22:29:40.209955 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2069632:2075256, ack 1, win 122, options [nop,nop,TS val 2894996077 ecr 54270449], length 5624
22:29:40.211081 IP 10.0.0.3.60933 > 91.189.88.141.80: Flags [.], ack 2190548, win 5513, options [nop,nop,TS val 54270484 ecr 2894996077], length 0
22:29:40.211148 IP 10.0.0.3.60933 > 91.189.88.141.80: Flags [.], ack 2198984, win 5448, options [nop,nop,TS val 54270484 ecr 2894996077], length 0
22:29:40.211500 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2198984:2200390, ack 1, win 122, options [nop,nop,TS val 2894996077 ecr 54270450], length 1406
22:29:40.211518 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2200390:2203202, ack 1, win 122, options [nop,nop,TS val 2894996077 ecr 54270450], length 2812
22:29:40.211544 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2203202:2206014, ack 1, win 122, options [nop,nop,TS val 2894996077 ecr 54270450], length 2812
22:29:40.211566 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2206014:2208826, ack 1, win 122, options [nop,nop,TS val 2894996077 ecr 54270450], length 2812
22:29:40.211589 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2208826:2211638, ack 1, win 122, options [nop,nop,TS val 2894996077 ecr 54270450], length 2812
22:29:40.211612 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2211638:2213044, ack 1, win 122, options [nop,nop,TS val 2894996077 ecr 54270450], length 1406
22:29:40.211627 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2213044:2215856, ack 1, win 122, options [nop,nop,TS val 2894996077 ecr 54270450], length 2812
22:29:40.211649 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2215856:2218668, ack 1, win 122, options [nop,nop,TS val 2894996077 ecr 54270450], length 2812
22:29:40.211698 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2218668:2224292, ack 1, win 122, options [nop,nop,TS val 2894996077 ecr 54270450], length 5624
22:29:40.211731 IP 91.189.88.141.80 > 10.0.0.3.60933: Flags [.], seq 2224292:2...


summary: - PMTUd broken in GRE tunnel configuration
+ PMTUd / DF broken in GRE tunnel configuration
description: updated
Robert Collins (lifeless) wrote :

tracepath -n cloud-images.ubuntu.com
22:30:27.641505 IP 10.0.0.1 > 10.0.0.3: ICMP time exceeded in-transit, length 556
 1: 10.0.0.1 4.143ms
 2: no reply
22:30:30.651228 IP 216.115.64.49 > 10.0.0.3: ICMP time exceeded in-transit, length 36
 3: 216.115.64.49 6.141ms
22:30:30.659616 IP 66.209.64.97 > 10.0.0.3: ICMP time exceeded in-transit, length 148
 4: 66.209.64.97 8.251ms
22:30:30.668198 IP 66.209.64.106 > 10.0.0.3: ICMP time exceeded in-transit, length 76
 5: 66.209.64.106 8.425ms
22:30:30.688755 IP 4.71.178.41 > 10.0.0.3: ICMP time exceeded in-transit, length 36
 6: 4.71.178.41 20.539ms
22:30:30.692817 IP 4.69.148.134 > 10.0.0.3: ICMP time exceeded in-transit, length 148
 7: 4.69.148.134 3.990ms
22:30:30.829288 IP 4.69.148.2 > 10.0.0.3: ICMP time exceeded in-transit, length 148
 8: 4.69.148.2 136.337ms asymm 19
22:30:30.965682 IP 4.69.151.182 > 10.0.0.3: ICMP time exceeded in-transit, length 148
 9: 4.69.151.182 136.286ms asymm 19
22:30:31.101295 IP 4.69.132.62 > 10.0.0.3: ICMP time exceeded in-transit, length 148
10: 4.69.132.62 135.502ms asymm 19
22:30:31.238083 IP 4.69.140.189 > 10.0.0.3: ICMP time exceeded in-transit, length 148
11: 4.69.140.189 136.673ms asymm 19
22:30:31.373547 IP 4.69.132.66 > 10.0.0.3: ICMP time exceeded in-transit, length 148
12: 4.69.132.66 135.363ms asymm 19
22:30:31.510225 IP 4.69.135.253 > 10.0.0.3: ICMP time exceeded in-transit, length 148
13: 4.69.135.253 136.567ms asymm 19
22:30:31.644349 IP 4.69.201.49 > 10.0.0.3: ICMP time exceeded in-transit, length 148
14: 4.69.201.49 134.014ms asymm 19
22:30:31.781755 IP 4.69.137.69 > 10.0.0.3: ICMP time exceeded in-transit, length 148
15: 4.69.137.69 137.290ms asymm 19
22:30:31.956156 IP 4.69.153.138 > 10.0.0.3: ICMP time exceeded in-transit, length 148
16: 4.69.153.138 174.287ms asymm 18
22:30:32.096409 IP 4.69.166.49 > 10.0.0.3: ICMP time exceeded in-transit, length 36
17: 4.69.166.49 140.155ms
22:30:32.236986 IP 212.187.138.82 > 10.0.0.3: ICMP time exceeded in-transit, length 36
18: 212.187.138.82 140.448ms
22:30:32.377457 IP 91.189.88.141 > 10.0.0.3: ICMP 91.189.88.141 udp port 44464 unreachable, length 556
19: 91.189.88.141 140.351ms reached
     Resume: pmtu 65535 hops 19 back 50

Robert Collins (lifeless) wrote :

The previous comment was with a bad MTU on the VM.
This one has a good MTU:
tracepath -n cloud-images.ubuntu.com
22:30:07.580826 IP 10.0.0.1 > 10.0.0.3: ICMP time exceeded in-transit, length 556
 1: 10.0.0.1 2.564ms
 2: no reply
22:30:10.590872 IP 216.115.64.49 > 10.0.0.3: ICMP time exceeded in-transit, length 36
 3: 216.115.64.49 6.092ms
22:30:10.595020 IP 66.209.64.97 > 10.0.0.3: ICMP time exceeded in-transit, length 148
 4: 66.209.64.97 4.016ms
22:30:10.601952 IP 66.209.64.106 > 10.0.0.3: ICMP time exceeded in-transit, length 76
 5: 66.209.64.106 6.768ms
22:30:10.609987 IP 4.71.178.41 > 10.0.0.3: ICMP time exceeded in-transit, length 36
 6: 4.71.178.41 7.967ms
22:30:10.613767 IP 4.69.148.134 > 10.0.0.3: ICMP time exceeded in-transit, length 148
 7: 4.69.148.134 3.706ms
22:30:10.751819 IP 4.69.148.2 > 10.0.0.3: ICMP time exceeded in-transit, length 148
 8: 4.69.148.2 137.996ms asymm 19
22:30:10.888361 IP 4.69.151.182 > 10.0.0.3: ICMP time exceeded in-transit, length 148
 9: 4.69.151.182 136.426ms asymm 19
22:30:11.024820 IP 4.69.132.62 > 10.0.0.3: ICMP time exceeded in-transit, length 148
10: 4.69.132.62 136.400ms asymm 19
22:30:11.161673 IP 4.69.140.189 > 10.0.0.3: ICMP time exceeded in-transit, length 148
11: 4.69.140.189 136.683ms asymm 19
22:30:11.298432 IP 4.69.132.66 > 10.0.0.3: ICMP time exceeded in-transit, length 148
12: 4.69.132.66 136.656ms asymm 19
22:30:11.434138 IP 4.69.135.253 > 10.0.0.3: ICMP time exceeded in-transit, length 148
13: 4.69.135.253 135.546ms asymm 19
22:30:11.570925 IP 4.69.141.17 > 10.0.0.3: ICMP time exceeded in-transit, length 148
14: 4.69.141.17 136.660ms asymm 19
22:30:11.706540 IP 4.69.137.69 > 10.0.0.3: ICMP time exceeded in-transit, length 148
15: 4.69.137.69 135.552ms asymm 19
22:30:11.843219 IP 4.69.153.134 > 10.0.0.3: ICMP time exceeded in-transit, length 148
16: 4.69.153.134 136.529ms asymm 18
22:30:11.985437 IP 4.69.166.49 > 10.0.0.3: ICMP time exceeded in-transit, length 36
17: 4.69.166.49 142.093ms
22:30:12.122212 IP 212.187.138.82 > 10.0.0.3: ICMP time exceeded in-transit, length 36
18: 212.187.138.82 136.674ms
22:30:12.262673 IP 91.189.88.141 > 10.0.0.3: ICMP 91.189.88.141 udp port 44464 unreachable, length 556
19: 91.189.88.141 140.401ms reached
     Resume: pmtu 65535 hops 19 back 50

Robert Collins (lifeless) wrote :

pmtud is enabled:
cat /proc/sys/net/ipv4/ip_no_pmtu_disc
0

Robert Collins (lifeless) wrote :

11:54 < salv-orlando> I mean if scaling the MTU back to 1440 worked for you we could do that as well. It's
                      just that I would like to understand why that happens. Can you confirm it is related to
                      different endpoints using different ovs releases? or does the issue occur even with both
                      endpoints using the same version of ovs?
11:54 < lifeless> both endpoints are running the same version of ovs
11:55 < lifeless> test matrix:
11:55 < lifeless> 1.10.2 - 1.10.2 - happens
11:55 < lifeless> 2.0.1 - 2.0.1 - happens

Robert Collins (lifeless) wrote :

NB: GRO is *on* on the network node interface, so performance may be impacted, but the MTU twiddling having such a dramatic effect points to deeper issues.

Robert Collins (lifeless) wrote :

Tested with GRO off: it had zero impact on performance with the MTU clamped low. With the MTU raised to the 'bad' point and GRO off on the network node, performance improved (but was not better than with the MTU clamped low).

Darragh O'Reilly (darragh-oreilly) wrote :

So just to confirm: when you set the mtu to <1459, you are happy with the download speed? And the problem you are reporting is that we are not sending back the "too big, needs fragmentation" ICMP when a received packet is too big for the tunnel? Even if we did, some sites block/ignore these ICMPs, so it is no panacea.

Anyway, I think you might have a problem with offloading. Looking at the tcpdumps above, the qr interface received some packets much greater than 1500, e.g. 2814 (which is 2x1407). These can hardly have come from the Internet; it seems some offload capability somewhere squashed two packets into one. The "somewhere" is the big question. I'm afraid the newer Ubuntu kernels might try to do it on *any* interface, virtual ones too, but I haven't seen this problem first-hand. Use 'tcpdump -ni $IFACE greater 1600'.

Offloading is bad for routers because it confuses the TCP endpoints and there will be lots of retransmissions. You should try using Wireshark, as it does TCP analysis and highlights problems in red. You can use tcpdump to capture the headers to a file and import it into Wireshark: 'tcpdump -n -s 100 -i $IFACE -w capture.pcap'. Note: for downloaded packets, the tcpdump would be the egress data from an interface, i.e. after offloading is done.
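A quick way to quantify the coalescing described above, using the 'length' values from the bad trace (a sketch; the over-MTU threshold plays the role of the suggested 'greater 1600' tcpdump filter):

```python
# Frames longer than the interface MTU on a routed path are evidence of
# GRO/LRO coalescing. Lengths taken from the "bad" tcpdump trace above.
MTU = 1500
lengths = [4221, 1407, 2814, 1407, 2814, 1407]

coalesced = [n for n in lengths if n > MTU]
print(coalesced)                                # [4221, 2814, 2814]
# Each oversized frame is an exact multiple of the 1407-byte segment,
# i.e. 2 or 3 TCP segments squashed into one:
print(all(n % 1407 == 0 for n in coalesced))    # True
```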

Ghe Rivero (ghe.rivero) wrote :

A solution for this could be to configure all VMs to have a lower MTU by default. This could be accomplished with the option:
dhcp-option-force=26,1458
Doesn't solve the root of the problem, but it works.
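Wired into Neutron, that option would live in a dnsmasq config file handed to the DHCP agent. A sketch, with illustrative paths; the fix that later merged uses 1400 rather than 1458:

```ini
# /etc/neutron/dnsmasq-neutron.conf  (illustrative path)
# DHCP option 26 = interface MTU; force it onto every lease:
dhcp-option-force=26,1458

# /etc/neutron/dhcp_agent.ini
# [DEFAULT]
# dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
```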

sowmini (sowmini-varadhan) wrote : Re: [Bug 1270646] Re: PMTUd / DF broken in GRE tunnel configuration


Question for you about this bug: is this within a single
subnet or across subnets?

--Sowmini

Ghe Rivero (ghe.rivero)
Changed in tripleo:
assignee: nobody → Ghe Rivero (ghe.rivero)
Pavel Kirpichyov (pavel-kirpichyov) wrote :

kernel: 3.13.0-18
ovs: 2.0.1+git20140120

The problem still exists.

OpenStack Infra (hudson-openstack) wrote : Fix proposed to tripleo-heat-templates (master)

Fix proposed to branch: master
Review: https://review.openstack.org/82803

Changed in tripleo:
assignee: Ghe Rivero (ghe.rivero) → James Slagle (james-slagle)
status: Triaged → In Progress
OpenStack Infra (hudson-openstack) wrote : Fix proposed to tripleo-image-elements (master)

Fix proposed to branch: master
Review: https://review.openstack.org/82804

OpenStack Infra (hudson-openstack) wrote : Fix merged to tripleo-image-elements (master)

Reviewed: https://review.openstack.org/82804
Committed: https://git.openstack.org/cgit/openstack/tripleo-image-elements/commit/?id=1f3d64e5a21f0a9e32d76d0fb1cdf6c68f0b5614
Submitter: Jenkins
Branch: master

commit 1f3d64e5a21f0a9e32d76d0fb1cdf6c68f0b5614
Author: James Slagle <email address hidden>
Date: Tue Mar 25 08:32:33 2014 -0400

    Expose dnsmasq options

    Expose the ability to set dnsmasq options for neutron dhcp agent. This
    will allow us to configure mtu to be 1400 for tenant instances on the
    overcloud. This should help with poor network performance and vm's that
    are just plain unreachable via ssh due to the GRE tunnel overhead.

    This will use a corresponding change to tripleo-heat-templates:
    If24326045987b5a484ba2f71f591092987966536

    Change-Id: I7dba10406a8560b5e177b5a983485d637cca3105
    Partial-Bug: #1270646

Dan Prince (dan-prince) wrote :

I'm fine with the TripleO side of this ticket. From a Neutron standpoint, lowering the MTU when using GRE does appear to be documented:

http://docs.openstack.org/admin-guide-cloud/content/openvswitch_plugin.html

Given not all Neutron configurations enable GRE I'm not convinced decreasing the MTU as a global default makes sense though...

Openstack Gerrit (openstack-gerrit) wrote : Fix merged to tripleo-heat-templates (master)

Reviewed: https://review.openstack.org/82803
Committed: https://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=b8d42f54c6ca193d6b24162d01f3e0a56bd8c6e6
Submitter: Jenkins
Branch: master

commit b8d42f54c6ca193d6b24162d01f3e0a56bd8c6e6
Author: James Slagle <email address hidden>
Date: Tue Mar 25 08:34:29 2014 -0400

    Expose dnsmasq options

    Adds a new parameter, NeutronDnsmasqOptions, to the overcloud template.

    Allows the ability to set dnsmasq options for neutron dhcp agent. This
    will allow us to configure mtu to be 1400 for tenant instances on the
    overcloud. This should help with poor network performance and vm's that
    are just plain unreachable via ssh due to the GRE tunnel overhead.

    The default here has been set to:
    dhcp-option-force=26,1400

    This is the recommended way to configure OpenStack with the Open vSwitch
    plugin per:
    http://docs.openstack.org/admin-guide-cloud/content/openvswitch_plugin.html

    All the documentation I can find on the web (openstack-dev,
    ask.openstack.org, etc), recommend applying this setting. Others have
    reported slow vm performance as well, and this resolves that issue
    (apparently anyway...we'd need to test).

    Change-Id: If24326045987b5a484ba2f71f591092987966536
    Partial-Bug: #1270646

James Slagle (james-slagle) wrote :

Closing; the patches to address this issue have merged, although they likely don't address the true root cause.

Changed in tripleo:
status: In Progress → Fix Released
tags: added: ovs
Changed in neutron:
importance: Undecided → Medium
Cedric Brandily (cbrandily) wrote :

This bug is > 365 days without activity. We are unsetting assignee and milestone and setting status to Incomplete in order to allow its expiry in 60 days.

If the bug is still valid, then update the bug status.

Changed in neutron:
status: New → Incomplete
no longer affects: neutron