OVS agent loop slowdown

Bug #1253993 reported by Salvatore Orlando
This bug affects 5 people
Affects     Status         Importance   Assigned to    Milestone
neutron     Fix Released   High         Brian Haley
Havana      Fix Released   Undecided    Unassigned
Icehouse    Fix Released   Undecided    Unassigned

Bug Description

This is probably a regression of bug #1194438

From several checks with parallel jobs, it seems the agent loop becomes slower across iterations.
The slowness is not due to plugin RPC calls, which might be optimized anyway, but mainly to applying iptables/OVS configurations.

This consistently occurs in tempest tests with parallelism enabled.

Example logs here: http://logs.openstack.org/20/57420/5/experimental/check-tempest-devstack-vm-neutron-isolated-parallel/54c6db3/logs

1 - OVS Agent - Iteration #33 starts at 1:36:03
2 - Nova API - Server POST request at 1:39:42
3 - Neutron Server - POST /ports at 1:39:44
4 - OVS Agent - OVS DB monitor detects the instance's tap device at 1:39:45
5 - OVS Agent - Iteration #33 completes processing device filters (security groups) at 1:39:51
6 - Nova API - Server ACTIVE at 1:39:55
7 - Neutron Server - Floating IP POST at 1:39:55
8 - Neutron L3/VPN Agent - Floating IP ready at 1:40:37 (42 seconds?? - this warrants a separate investigation)
9 - OVS Agent - Iteration #33 completes processing devices at 1:40:16
NOTE: The added device was not processed because the iteration started before the device was detected
10 - TEMPEST - Test times out at 1:40:56 - connection failed because the internal port was not wired
11 - OVS Agent - Iteration #33 completes processing ancillary ports at 1:42:07
12 - OVS Agent - Iteration #34 starts at 1:42:08
13 - OVS Agent - Iteration #34 completes processing device filters at 1:43:35
14 - The wiring of the interface for the server is not captured by the logs because the tempest test completed in the meantime.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

The cause of the massive amount of time needed to complete a loop is the number of calls sent from the neutron server which need to be handled.

In some cases about 1,000 incoming requests, which resulted in about 1,500 calls from the agent to neutron-server, were observed in a single tempest run (isolated and parallel).

In particular, calls for security group updates and port updates trigger refresh_firewall, which is a rather expensive call.
In some cases as many as 20 threads were concurrently running refresh_firewall; all these threads synchronize on a semaphore for iptables.

This number is currently being brought down in https://review.openstack.org/#/c/57420, by:
- ensuring messages are sent from the server to the client only when really necessary
- reworking message handling in the agent by reacting to notifications in the main rpc loop rather than immediately once the message is received, thus avoiding concurrent execution of methods which end up making exactly the same changes to iptables (see the sketch below)
- grouping calls from the agent to the server where possible (e.g. sending a single request for device details instead of a request for each device)

Leveraging threads or external processes for tasks which do not have to be synchronous with port processing is also currently being evaluated.
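
As a rough illustration of the "react in the main rpc loop" approach described above, here is a minimal Python sketch; the class and attribute names are illustrative assumptions, not the actual neutron implementation:

    import threading

    class DeferredFirewallRefresh(object):
        """Collect security group update notifications and apply them once
        per agent loop iteration instead of immediately on each message."""

        def __init__(self, firewall):
            self.firewall = firewall          # e.g. an iptables firewall driver
            self.devices_to_refilter = set()
            self.global_refresh_firewall = False
            self._lock = threading.Lock()

        # Notification handlers: only record what needs refreshing.
        def security_groups_rule_updated(self, devices):
            with self._lock:
                self.devices_to_refilter |= set(devices)

        def security_groups_provider_updated(self):
            with self._lock:
                self.global_refresh_firewall = True

        # Called once per main rpc loop iteration: do the expensive work a
        # single time for everything accumulated since the last iteration.
        def setup_port_filters(self):
            with self._lock:
                devices = self.devices_to_refilter
                do_global = self.global_refresh_firewall
                self.devices_to_refilter = set()
                self.global_refresh_firewall = False
            if do_global:
                self.firewall.refresh_firewall()
            elif devices:
                self.firewall.refresh_firewall(devices)

With this pattern, a burst of notifications received between two iterations collapses into a single refresh_firewall call instead of many concurrent ones contending for the iptables lock.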

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

FYI: I am accumulating fixes for this bug in review.openstack.org/57420

Changed in neutron:
status: New → Triaged
importance: Undecided → High
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/61964

Changed in neutron:
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/63100

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/63558

Changed in neutron:
assignee: Salvatore Orlando (salvatore-orlando) → Amir Sadoughi (amir-sadoughi)
Changed in neutron:
assignee: Amir Sadoughi (amir-sadoughi) → nobody
Maru Newby (maru)
Changed in neutron:
assignee: nobody → Salvatore Orlando (salvatore-orlando)
Changed in neutron:
milestone: none → icehouse-2
Thierry Carrez (ttx)
Changed in neutron:
milestone: icehouse-2 → icehouse-3
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (master)

Reviewed: https://review.openstack.org/63100
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=3046c4ae22b10f9e4fa83a47bfe089554d4a4681
Submitter: Jenkins
Branch: master

commit 3046c4ae22b10f9e4fa83a47bfe089554d4a4681
Author: Salvatore Orlando <email address hidden>
Date: Thu Dec 19 00:26:38 2013 -0800

    Improve handling of security group updates

    Currently updates to security group rules or membership
    are handled by immediately triggering a call to refresh_firewall.
    This call is quite expensive, and it is often executed with a
    very high frequency.

    With this patch, the notification handler simply adds devices for
    which the firewall should be refreshed to a set, which will then
    be processed in another routine. The latter is supposed to
    be called in the main agent loop.

    This patch for 'provider updates' simply sets a flag for refreshing
    the firewall for all devices.

    In order to avoid breaking other agents leveraging the security
    group RPC mixin, the reactive behaviour is still available, and is
    still the default way of handling security group updates.

    Partial-Bug: #1253993
    Partially implements blueprint: neutron-tempest-parallel

    Change-Id: I1574544734865506ff5383404516cc9349c16ec4

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Even if things are much better now, there are still cases in which a loop takes a considerable amount of time.
In the case below, for instance, it took almost 70 seconds to process 12 devices (almost 6 seconds per device).

Logs: http://logs.openstack.org/94/67994/3/check/check-tempest-dsvm-neutron-full/88959e7/logs/screen-q-agt.txt.gz#_2014-02-21_19_01_57_886

No gate failures have been observed because of this, but under load, failures in the gate might happen.
This can also cause instances to boot without network configuration, as DHCPDISCOVER messages might go unanswered because of the delay in wiring.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Feel free to take ownership of this bug if you find some other way to speed up the agent loop!

Revision history for this message
Sudhakar Gariganti (sudhakar-gariganti) wrote :

Apart from the neutron server side RPCs, we observed in our scale tests that the iptables manager's _apply method takes a significant amount of time as the number of devices needing a firewall refresh increases, especially the _modify_rules method.
Using list comprehensions in this method is adding to the delay. I have modified that particular code and saw good improvements.
Will try posting the changes for review at the earliest.

Changed in neutron:
assignee: Salvatore Orlando (salvatore-orlando) → Sudhakar (sudhakar-gariganti)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/77549

Revision history for this message
Sudhakar Gariganti (sudhakar-gariganti) wrote :

https://review.openstack.org/77549

Please review and provide comments.

Changed in neutron:
status: In Progress → Fix Committed
Revision history for this message
Akihiro Motoki (amotoki) wrote :

"Fix Committed" means the patch is megred. While the patch is under review, status should be "In Progress".

Changed in neutron:
status: Fix Committed → In Progress
Revision history for this message
Sudhakar Gariganti (sudhakar-gariganti) wrote :

Thanks for correcting. Will keep this in mind for the next time.

Changed in neutron:
assignee: Sudhakar (sudhakar-gariganti) → Brian Haley (brian-haley)
Thierry Carrez (ttx)
Changed in neutron:
milestone: icehouse-3 → icehouse-rc1
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Consider moving this out of RC-1 and reducing its importance to medium.
Current gate conditions reveal no failures attributable to the agent loop slowdown.

The patch under review is, in my opinion, not strictly connected to this bug.

Changed in neutron:
milestone: icehouse-rc1 → none
tags: added: icehouse-rc-potential
Revision history for this message
Sudhakar Gariganti (sudhakar-gariganti) wrote :

There were concerns about whether the patch https://review.openstack.org/#/c/77549/ had been properly tested, as it involves a logic change.

We have done significant testing with this patch and want to share a few results from our experiments.

We were basically trying to see how many VMs we could scale to with the OVS agent in use. With the default security group (which has a remote security group), beyond 250-300 VMs, VMs were not able to get DHCP IPs. We had 16 compute nodes (CNs), with VMs uniformly distributed across them. The VM image had a wait period of 120 seconds to receive the DHCP response.
By the time we had around 18-19 VMs on each CN (around 6k iptables rules), each RPC loop was taking close to 140 seconds (if there was any update). The reason VMs were not getting IPs was that the iptables rules required for a VM to send out its DHCP request were not in place before the 120-second wait period expired. Upon further investigation we discovered that the for loop searching iptables rules in the _modify_rules method of iptables_manager.py was eating a big chunk of the overall time spent.

After this patch, close to 680 VMs were able to get IPs. The number of iptables rules at this point was close to 20K, with around 40 VMs per CN.

To summarize, we were able to increase the processing capability of a compute node from 6K iptables rules to 20K iptables rules, which helped more VMs get a DHCP IP within the 120-second wait period. You can imagine the situation when the wait time is less than 120 seconds.

Carl and Salvatore opined that we should create a new bug/enhancement and link the patch to that. That makes sense, as this patch doesn't solve this bug completely. But at the same time, I want to state that this patch is not totally unrelated to this bug. :)

Changed in neutron:
milestone: none → juno-1
Thierry Carrez (ttx)
tags: added: icehouse-backport-potential
removed: icehouse-rc-potential
Revision history for this message
Robert Pothier (rpothier) wrote :

Can this be added to Havana?

In Havana, Bug #1260791 is seen at scale.
I have proposed bringing a partial fix for this bug into Havana.
This fixes the issue with the neutron agents flapping continuously in Havana.

https://review.openstack.org/#/c/86062/

Thierry Carrez (ttx)
tags: added: havana-backport-potential
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (master)

Reviewed: https://review.openstack.org/77549
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=0c202ab3e453e38c09f04978e4fce30d6ee6350c
Submitter: Jenkins
Branch: master

commit 0c202ab3e453e38c09f04978e4fce30d6ee6350c
Author: Sudhakar <email address hidden>
Date: Mon Mar 3 15:35:20 2014 +0530

    Improve iptables_manager _modify_rules() method

    As the number of ports per default security group increases, the
    number of iptables entries on the Compute Node grows. Because of
    this, there is a gradual increase in the time taken to apply chains
    and rules.

    Currently we are using list comprehensions to find if a new chain or
    rule matches an existing one. Instead, walk through the list in
    reverse to find a matching entry.

    Added a new method, _find_last_entry(), to return the entry we are
    searching for.

    Change-Id: I3585479ffa00be556b8b21dc9dbd6b36ad37f4de
    Closes-Bug: #1302272
    Related-Bug: #1253993
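
As a rough sketch of the reverse-walk lookup described in this commit message (the helper name follows the commit text, but the snippet below is an illustration rather than the merged code):

    def _find_last_entry(filter_list, match_str):
        # Walk the existing chains/rules from the end and return the first
        # exact match, instead of building a list of every match with a
        # list comprehension just to check whether the entry already exists.
        for line in reversed(filter_list):
            line = line.strip()
            if line == match_str:
                return line
        return None

    # Usage: skip re-adding a rule that is already present.
    existing_rules = [
        '-A neutron-openvswi-FORWARD -j neutron-openvswi-sg-chain',
        '-A neutron-openvswi-sg-chain -j ACCEPT',
    ]
    if _find_last_entry(existing_rules, '-A neutron-openvswi-sg-chain -j ACCEPT'):
        pass  # already present, no duplicate appended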

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/icehouse)

Related fix proposed to branch: stable/icehouse
Review: https://review.openstack.org/98692

Kyle Mestery (mestery)
Changed in neutron:
milestone: juno-1 → none
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/icehouse)

Reviewed: https://review.openstack.org/98692
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=6fd5a20132e1327913c0a0099781a903f37c3ee9
Submitter: Jenkins
Branch: stable/icehouse

commit 6fd5a20132e1327913c0a0099781a903f37c3ee9
Author: Sudhakar <email address hidden>
Date: Mon Mar 3 15:35:20 2014 +0530

    Improve iptables_manager _modify_rules() method

    As the number of ports per default security group increases, the
    number of iptables entries on the Compute Node grows. Because of
    this, there is a gradual increase in the time taken to apply chains
    and rules.

    Currently we are using list comprehensions to find if a new chain or
    rule matches an existing one. Instead, walk through the list in
    reverse to find a matching entry.

    Added a new method, _find_last_entry(), to return the entry we are
    searching for.

    Change-Id: I3585479ffa00be556b8b21dc9dbd6b36ad37f4de
    Closes-Bug: #1302272
    Related-Bug: #1253993
    (cherry picked from commit 0c202ab3e453e38c09f04978e4fce30d6ee6350c)

tags: added: in-stable-icehouse
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/havana)

Reviewed: https://review.openstack.org/86062
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ac971fdf6aefdb5bdb1854c2112ed17df5f9e871
Submitter: Jenkins
Branch: stable/havana

commit ac971fdf6aefdb5bdb1854c2112ed17df5f9e871
Author: Robert Pothier <email address hidden>
Date: Tue Apr 8 11:25:11 2014 -0400

    Improve handling of security group updates

    Currently updates to security group rules or membership
    are handled by immediately triggering a call to refresh_firewall.
    This call is quite expensive, and it is often executed with a
    very high frequency.

    With this patch, the notification handler simply adds devices for
    which the firewall should be refreshed to a set, which will then
    be processed in another routine. The latter is supposed to
    be called in the main agent loop.

    This patch for 'provider updates' simply sets a flag for refreshing
    the firewall for all devices.

    In order to avoid breaking other agents leveraging the security
    group RPC mixin, the reactive behaviour is still available, and is
    still the default way of handling security group updates.

    Patching into Havana as this fixes the issue of the agents flapping

    Partial-Bug: #1253993
    Partially implements blueprint: neutron-tempest-parallel

    Cherry-picked from 5e6e592132aa9a98936ce3bfdb66efc7832caafb
                       3046c4ae22b10f9e4fa83a47bfe089554d4a4681
                       26d8231b0bf4050453bcb6122484c801eff852ca

    Change-Id: Idc682f19ebfef4c00b6bb510ebd7d77775f4bbca

tags: added: in-stable-havana
Changed in neutron:
status: In Progress → Fix Released
milestone: none → juno-3
Revision history for this message
Chen Ya Qin (yqchen) wrote :

The optimization of _modify_rules in patch https://review.openstack.org/#/c/77549/ did shorten the time, but it is still not short enough.
I found that _apply_synchronized takes more than half an hour (36 minutes in my environment) when the number of active VMs in the cloud is 982.
As a result, the time to bring a port up when booting an instance can be very long. Further optimization of _modify_rules is needed for large-scale deployments.
I reported a bug at https://bugs.launchpad.net/neutron/+bug/1352826.

Revision history for this message
Sudhakar Gariganti (sudhakar-gariganti) wrote :

Hi Chen,

You are correct. Basically, for large deployments it is better to avoid the default security group (which references itself as a remote group) and use a custom security group with appropriate rules to provide the same functionality as the default one.

Also, the introduction of ipset might bring good results in this area (see the sketch after this comment). Have a look at the blueprint https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security, which is working on this.

Thanks,
Sudhakar
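
To illustrate why ipset helps here, the sketch below (hypothetical chain and set names, driving the standard ipset/iptables CLI from Python) replaces one iptables rule per remote group member with a single set-based rule, so membership changes only touch the ipset and not the iptables chains:

    import subprocess

    # Create a set holding the addresses of the remote security group members.
    subprocess.check_call(['ipset', 'create', 'sg-members', 'hash:ip'])
    for ip in ['10.0.0.5', '10.0.0.6']:          # example member addresses
        subprocess.check_call(['ipset', 'add', 'sg-members', ip])

    # One rule matching the whole set, instead of one rule per member.
    subprocess.check_call(['iptables', '-N', 'neutron-sg-chain'])
    subprocess.check_call(['iptables', '-A', 'neutron-sg-chain',
                           '-m', 'set', '--match-set', 'sg-members', 'src',
                           '-j', 'RETURN'])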

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/havana)

Related fix proposed to branch: stable/havana
Review: https://review.openstack.org/112700

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/havana)

Reviewed: https://review.openstack.org/112700
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=3a7df5c819eb234e8ef6590b35c3f2016bd5489f
Submitter: Jenkins
Branch: stable/havana

commit 3a7df5c819eb234e8ef6590b35c3f2016bd5489f
Author: Sudhakar <email address hidden>
Date: Mon Mar 3 15:35:20 2014 +0530

    Improve iptables_manager _modify_rules() method

    As the number of ports per default security group increases, the
    number of iptables entries on the Compute Node grows. Because of
    this, there is a gradual increase in the time taken to apply chains
    and rules.

    Currently we are using list comprehensions to find if a new chain or
    rule matches an existing one. Instead, walk through the list in
    reverse to find a matching entry.

    Added a new method, _find_last_entry(), to return the entry we are
    searching for.

    Change-Id: I3585479ffa00be556b8b21dc9dbd6b36ad37f4de
    Closes-Bug: #1302272
    Related-Bug: #1253993
    (cherry picked from commit 0c202ab3e453e38c09f04978e4fce30d6ee6350c)

Thierry Carrez (ttx)
Changed in neutron:
milestone: juno-3 → 2014.2