commit 90496824c0253d2534f299ebcf5dc00774f70fe7
Author: LIU Yulong <email address hidden>
Date: Wed Jan 30 09:54:52 2019 +0800
Dynamically increase l3 router process queue green pool size
There is a race condition between nova-compute booting an instance and
the l3-agent processing the DVR (local) router on the compute node.
This issue can be seen when a large number of instances are booted on
the same host under different DVR routers, so the l3-agent has to
process all these DVR routers on that host concurrently.
For now we have a green pool for the router ResourceProcessingQueue
with 8 greenlets, but some of these routers can still be left waiting.
Even worse, there are time-consuming actions during the router
processing procedure, for instance installing ARP entries, iptables
rules, route rules, etc.
So when the VM is up, it will try to get metadata via the local proxy
hosted by the DVR router. But the router is not ready yet on that
host, and as a result those instances will fail to set up some
configuration in the guest OS.
This patch adds a new measurement based on the router quantity to
determine the L3 router process queue green pool size. The pool size
is bounded between 8 (the original value) and 32, because we do not
want the L3 agent to consume too much host resource processing
routers on the compute node.
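The bounded sizing described above can be sketched as a simple clamp. This is a minimal illustration of the idea, not the exact code of the Neutron patch; the constant and function names here are hypothetical:

```python
# Bounds from the commit message: the pool grows from 8 (the original
# default) up to at most 32 greenlets. Names are illustrative only.
ROUTER_PROCESS_GREENLET_MIN = 8
ROUTER_PROCESS_GREENLET_MAX = 32

def calculate_pool_size(router_count):
    """Clamp the desired green pool size to [MIN, MAX] based on the
    number of routers hosted on this node."""
    return max(ROUTER_PROCESS_GREENLET_MIN,
               min(ROUTER_PROCESS_GREENLET_MAX, router_count))
```

In the agent, a value computed this way would be used to resize the eventlet green pool backing the ResourceProcessingQueue (e.g. via `GreenPool.resize()`), so a few routers keep the original pool of 8 while hundreds of routers cap out at 32 workers.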
Related-Bug: #1813787
Change-Id: I62393864a103d666d5d9d379073f5fc23ac7d114
(cherry picked from commit 837c9283abd4ccb56d5b4ad0eb1ca435cd2fdf3b)
Reviewed: https://review.opendev.org/728288
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=90496824c0253d2534f299ebcf5dc00774f70fe7
Submitter: Zuul
Branch: stable/queens
Related-Bug: #1813787
Change-Id: I62393864a103d666d5d9d379073f5fc23ac7d114
(cherry picked from commit 837c9283abd4ccb56d5b4ad0eb1ca435cd2fdf3b)