resource_versions in agent state reports led to performance degradation

Bug #1567497 reported by Oleg Bondarev
This bug affects 1 person
Affects: neutron
Status: Fix Released
Importance: High
Assigned to: Oleg Bondarev

Bug Description

resource_versions were recently included in agent state reports to support rolling upgrades (commit 97a272a892fcf488949eeec4959156618caccae8). The downside is that this added processing on the server side when handling state reports: an update of the local resource versions cache and, more seriously, RPC casts to all other servers to do the same.

All of this led to visible performance degradation at scale, with hundreds of agents constantly sending reports. Under load (a rally test), agents may start "blinking", which makes the cluster very unstable.

We need to optimize how agents notify servers about resource_versions.
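
To make the overhead concrete, here is a minimal sketch (illustrative only; class and method names are hypothetical, not the actual neutron-server code) of what handling every state report implied before the fix: each report updated the server's local resource-version cache and then fanout-cast the same update to every other neutron-server.

    # Simplified illustration -- names and structure are hypothetical,
    # not the real neutron-server code.

    class FakeVersionManager(object):
        """Stands in for the server-side cache of per-agent resource versions."""

        def __init__(self):
            self._versions = {}

        def update_versions(self, consumer, resource_versions):
            self._versions[consumer] = resource_versions


    class StateReportHandler(object):
        def __init__(self, version_manager, rpc_client):
            self.version_manager = version_manager
            self.rpc_client = rpc_client  # hypothetical fanout RPC client

        def report_state(self, agent_id, agent_state):
            resource_versions = agent_state.get('resource_versions')
            if resource_versions:
                # 1) update this server's local cache ...
                self.version_manager.update_versions(agent_id, resource_versions)
                # 2) ... and fanout-cast the same update to every other
                # neutron-server.  With hundreds of agents reporting
                # periodically, this is where the load adds up.
                self.rpc_client.cast_to_all_servers(agent_id, resource_versions)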

Changed in neutron:
status: New → In Progress
Revision history for this message
Oleg Bondarev (obondarev) wrote :
tags: added: mitaka-backport-potential
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/mitaka)

Fix proposed to branch: stable/mitaka
Review: https://review.openstack.org/304501

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (master)

Reviewed: https://review.openstack.org/302792
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=e532ee3fccd0820f9ab0efc417ee787fb8c870e9
Submitter: Jenkins
Branch: master

commit e532ee3fccd0820f9ab0efc417ee787fb8c870e9
Author: Oleg Bondarev <email address hidden>
Date: Thu Apr 7 16:45:52 2016 +0300

    Notify resource_versions from agents only when needed

    resource_versions were recently included in agent state reports to
    support rolling upgrades (commit 97a272a892fcf488949eeec4959156618caccae8).
    The downside is that this added processing on the server side when handling
    state reports: an update of the local resource versions cache and, more
    seriously, RPC casts to all other servers to do the same.
    All of this led to visible performance degradation at scale, with hundreds
    of agents constantly sending reports. Under load (a rally test), agents
    may start "blinking", which makes the cluster very unstable.

    In fact there is no need to send and update resource_versions in each state
    report. There are two cases when it should be done:
     1) the agent was restarted (after it was upgraded);
     2) the agent revived - which means the server was not receiving or able
        to process state reports for some time (agent_down_time); during that
        time the agent might have been upgraded and restarted.

    So this patch makes agents include resource_versions info only on startup.
    After an agent revives, the server itself updates the version_manager with
    resource_versions taken from the agent's DB record - this is to avoid the
    version_manager becoming outdated.

    Closes-Bug: #1567497
    Change-Id: I47a9869801f4e8f8af2a656749166b6fb49bcd3b
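
The core of the fix can be sketched as two small pieces, shown here with hypothetical, simplified names (not the actual Neutron code): the agent drops resource_versions from its state after the first report, and the server refreshes its version cache from the agent's DB record whenever an agent revives.

    # Illustrative sketch only -- names are hypothetical.

    class Agent(object):
        def __init__(self, state_rpc, resource_versions):
            self.state_rpc = state_rpc
            self.agent_state = {
                'host': 'compute-1',
                'resource_versions': resource_versions,
            }

        def _report_state(self):
            self.state_rpc.report_state(self.agent_state)
            # resource_versions only need to be announced once per agent start,
            # so drop them from subsequent periodic reports.
            self.agent_state.pop('resource_versions', None)


    def handle_agent_revived(version_manager, agent_db_record):
        """Server side: an agent that was considered down is reporting again.

        The agent may have been upgraded and restarted while it was "down", so
        refresh the version cache from the resource_versions stored in the
        agent's DB record rather than waiting for a report that will no longer
        carry them.
        """
        resource_versions = agent_db_record.get('resource_versions') or {}
        version_manager.update_versions(agent_db_record['id'], resource_versions)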

Changed in neutron:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/mitaka)

Reviewed: https://review.openstack.org/304501
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=05a4a34b7e46c2e13a9bd874674804a94f342d0c
Submitter: Jenkins
Branch: stable/mitaka

commit 05a4a34b7e46c2e13a9bd874674804a94f342d0c
Author: Oleg Bondarev <email address hidden>
Date: Thu Apr 7 16:45:52 2016 +0300

    Notify resource_versions from agents only when needed

    resource_versions were recently included in agent state reports to
    support rolling upgrades (commit 97a272a892fcf488949eeec4959156618caccae8).
    The downside is that this added processing on the server side when handling
    state reports: an update of the local resource versions cache and, more
    seriously, RPC casts to all other servers to do the same.
    All of this led to visible performance degradation at scale, with hundreds
    of agents constantly sending reports. Under load (a rally test), agents
    may start "blinking", which makes the cluster very unstable.

    In fact there is no need to send and update resource_versions in each state
    report. There are two cases when it should be done:
     1) the agent was restarted (after it was upgraded);
     2) the agent revived - which means the server was not receiving or able
        to process state reports for some time (agent_down_time); during that
        time the agent might have been upgraded and restarted.

    So this patch makes agents include resource_versions info only on startup.
    After an agent revives, the server itself updates the version_manager with
    resource_versions taken from the agent's DB record - this is to avoid the
    version_manager becoming outdated.

    Closes-Bug: #1567497
    Change-Id: I47a9869801f4e8f8af2a656749166b6fb49bcd3b
    (cherry picked from commit e532ee3fccd0820f9ab0efc417ee787fb8c870e9)

tags: added: in-stable-mitaka
Revision history for this message
Doug Hellmann (doug-hellmann) wrote : Fix included in openstack/neutron 8.1.0

This issue was fixed in the openstack/neutron 8.1.0 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/314250

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (master)

Reviewed: https://review.openstack.org/314250
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=3bf73801df169de40d365e6240e045266392ca63
Submitter: Jenkins
Branch: master

commit a323769143001d67fd1b3b4ba294e59accd09e0e
Author: Ryan Moats <email address hidden>
Date: Tue Oct 20 15:51:37 2015 +0000

    Revert "Improve performance of ensure_namespace"

    This reverts commit 81823e86328e62850a89aef9f0b609bfc0a6dacd.

    Unneeded optimization: this commit only improves execution
    time on the order of milliseconds, which is less than 1% of
    the total router update execution time at the network node.

    This also

    Closes-bug: #1574881

    Change-Id: Icbcdf4725ba7d2e743bb6761c9799ae436bd953b

commit 7fcf0253246832300f13b0aa4cea397215700572
Author: OpenStack Proposal Bot <email address hidden>
Date: Thu Apr 21 07:05:16 2016 +0000

    Imported Translations from Zanata

    For more information about this automatic import see:
    https://wiki.openstack.org/wiki/Translations/Infrastructure

    Change-Id: I9e930750dde85a9beb0b6f85eeea8a0962d3e020

commit 643b4431606421b09d05eb0ccde130adbf88df64
Author: OpenStack Proposal Bot <email address hidden>
Date: Tue Apr 19 06:52:48 2016 +0000

    Imported Translations from Zanata

    For more information about this automatic import see:
    https://wiki.openstack.org/wiki/Translations/Infrastructure

    Change-Id: I52d7460b3265b5460b9089e1cc58624640dc7230

commit 1ffea42ccdc14b7a6162c1895bd8f2aae48d5dae
Author: OpenStack Proposal Bot <email address hidden>
Date: Mon Apr 18 15:03:30 2016 +0000

    Updated from global requirements

    Change-Id: Icb27945b3f222af1d9ab2b62bf2169d82b6ae26c

commit b970ed5bdac60c0fa227f2fddaa9b842ba4f51a7
Author: Kevin Benton <email address hidden>
Date: Fri Apr 8 17:52:14 2016 -0700

    Clear DVR MAC on last agent deletion from host

    Once all agents are deleted from a host, the DVR MAC generated
    for that host should be deleted as well to prevent a buildup of
    pointless flows generated in the OVS agent for hosts that don't
    exist.

    Closes-Bug: #1568206
    Change-Id: I51e736aa0431980a595ecf810f148ca62d990d20
    (cherry picked from commit 92527c2de2afaf4862fddc101143e4d02858924d)
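
For context, the idea behind that change can be sketched as follows (hypothetical helper names, not the real Neutron DB API): when the last agent on a host is removed, the DVR MAC entry generated for that host is removed too.

    # Hypothetical sketch -- db.get_agents_on_host() and
    # db.delete_dvr_mac_for_host() stand in for the real Neutron DB calls.

    def delete_agent(context, agent_id, db):
        agent = db.get_agent(context, agent_id)
        db.delete_agent(context, agent_id)
        # If this was the last agent on the host, the per-host DVR MAC is no
        # longer needed; removing it keeps OVS agents from installing flows
        # for a host that no longer exists.
        if not db.get_agents_on_host(context, agent['host']):
            db.delete_dvr_mac_for_host(context, agent['host'])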

commit eee9e58ed258a48c69effef121f55fdaa5b68bd6
Author: Mike Bayer <email address hidden>
Date: Tue Feb 9 13:10:57 2016 -0500

    Add an option for WSGI pool size

    Neutron currently hardcodes the number of
    greenlets used to process requests in a process to 1000.
    As detailed in
    http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html

    this can cause requests to wait within one process
    for an available database connection while other
    processes remain available.

    By adding a wsgi_default_pool_size option functionally
    identical to that of Nova, we can lower the number of
    greenlets per process to be more in line with a typical
    max database connection pool size.

    DocImpact: a previously unused configuration value
               wsgi_default_pool_size is now used to a...
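
As a rough illustration of how such an option is typically wired up with oslo.config and eventlet (a simplified sketch under those assumptions, not the exact Neutron WSGI code; the default shown is illustrative):

    # Simplified sketch, not the actual Neutron WSGI server code.
    import eventlet
    import eventlet.wsgi
    from oslo_config import cfg

    # Exposing the pool size as configuration lets operators align the number
    # of greenthreads per process with their database connection pool size,
    # instead of relying on the previously hardcoded 1000.
    OPTS = [
        cfg.IntOpt('wsgi_default_pool_size', default=100,  # illustrative value
                   help='Size of the pool of greenthreads used by a WSGI '
                        'server process.'),
    ]
    cfg.CONF.register_opts(OPTS)

    def serve(application, sock):
        pool = eventlet.GreenPool(cfg.CONF.wsgi_default_pool_size)
        eventlet.wsgi.server(sock, application, custom_pool=pool)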

tags: added: neutron-proactive-backport-potential
Revision history for this message
Doug Hellmann (doug-hellmann) wrote : Fix included in openstack/neutron 9.0.0.0b1

This issue was fixed in the openstack/neutron 9.0.0.0b1 development milestone.

tags: removed: neutron-proactive-backport-potential
tags: removed: mitaka-backport-potential