Comment 15 for bug 1813787

OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/rocky)

Reviewed: https://review.opendev.org/728287
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=0b9f4f275c681feb9dcbf4bef0ff29d5344f0fdc
Submitter: Zuul
Branch: stable/rocky

commit 0b9f4f275c681feb9dcbf4bef0ff29d5344f0fdc
Author: LIU Yulong <email address hidden>
Date: Wed Jan 30 09:54:52 2019 +0800

    Dynamically increase l3 router process queue green pool size

    There is a race condition between nova-compute booting an instance
    and the l3-agent processing the DVR (local) router on the compute
    node. The issue shows up when a large number of instances are booted
    on the same host and those instances belong to different DVR routers,
    so the l3-agent has to process all of these DVR routers on that host
    concurrently.
    Currently the router ResourceProcessingQueue uses a green pool of 8
    greenlets, so some of these routers can still be left waiting. Even
    worse, the router processing procedure includes time-consuming
    actions such as installing ARP entries, iptables rules, route rules,
    etc.
    So when the VM comes up, it tries to fetch metadata via the local
    proxy hosted by the DVR router, but the router is not ready yet on
    that host, and in the end those instances fail to set up some of
    their configuration in the guest OS.

    This patch sizes the L3 router process queue green pool dynamically
    based on the number of routers. The pool size is clamped between 8
    (the original value) and 32, because we do not want the L3 agent to
    consume too many host resources processing routers on the compute
    node.

    Conflicts:
        neutron/tests/functional/agent/l3/test_legacy_router.py

    Related-Bug: #1813787
    Change-Id: I62393864a103d666d5d9d379073f5fc23ac7d114
    (cherry picked from commit 837c9283abd4ccb56d5b4ad0eb1ca435cd2fdf3b)
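
As a rough illustration of the sizing policy the commit message describes, here is a minimal sketch (not the actual neutron patch) that clamps an eventlet green pool between 8 and 32 greenlets based on the number of routers the agent hosts. The names resize_router_pool and router_count are hypothetical.

    import eventlet

    ROUTER_POOL_MIN = 8    # original fixed pool size
    ROUTER_POOL_MAX = 32   # upper bound so the agent does not hog the host

    # Start with the original size; grow only when there are more routers.
    pool = eventlet.GreenPool(size=ROUTER_POOL_MIN)

    def resize_router_pool(router_count):
        """Adjust the pool as the number of hosted routers changes."""
        new_size = max(ROUTER_POOL_MIN, min(ROUTER_POOL_MAX, router_count))
        if new_size != pool.size:
            pool.resize(new_size)

    # Example: after syncing 20 routers, allow up to 20 concurrent greenlets.
    resize_router_pool(20)

The clamp keeps the original behaviour for small deployments while still bounding how much of the compute node the L3 agent can consume when many routers land on one host.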