performance degradation in placement with large number of resource providers
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Compute (nova) | Fix Released | High | Chris Dent | |
Bug Description
Using today's master, there is a big performance degradation in GET /allocation_
Using a limit does not make any difference; the cost is in generating the original data.
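Why a limit does not help can be sketched as follows (a toy model with hypothetical names, not nova code): the full candidate set is built before the limit is applied, so the expensive generation step runs regardless of the limit.

```python
def build_all_candidates(num_providers):
    # Stand-in for the expensive step: one entry per resource provider.
    # In the real code this builds provider summaries and allocation
    # requests for every matching provider.
    return [{"provider": n} for n in range(num_providers)]


def get_allocation_candidates(num_providers, limit=None):
    candidates = build_all_candidates(num_providers)  # cost grows with num_providers
    if limit is not None:
        # The limit only truncates the already-built list, so it does not
        # reduce the generation cost measured in this bug.
        candidates = candidates[:limit]
    return candidates
```

Measured time is dominated by `build_all_candidates`, which is why varying the limit changes nothing.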
I did some advanced LOG.debug-based benchmarking to identify three places where things are a problem, and maybe even fixed the worst one. See the diff below. The two main culprits are ResourceProvide
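A minimal version of that LOG.debug-style benchmarking, assuming a standard `logging` setup (the `timed` helper is a hypothetical illustration, not nova code):

```python
import logging
import time
from contextlib import contextmanager

LOG = logging.getLogger(__name__)


@contextmanager
def timed(label):
    """Log how long the wrapped block takes, at debug level."""
    start = time.monotonic()
    try:
        yield
    finally:
        LOG.debug("%s took %.4f seconds", label, time.monotonic() - start)


# Usage: wrap each suspect section to see which one dominates.
with timed("build provider summaries"):
    summaries = [str(i) for i in range(100000)]
```

Wrapping each candidate section this way is enough to rank the hot spots without pulling in a profiler.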
In the diff I've already changed one of them (the second chunk) to use the data that _build_
The third chunk is because we have a big loop, but I suspect there is some duplication that can be avoided. I have not investigated that closely (yet).
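One generic way to shave such a loop, assuming the per-iteration work repeats lookups whose answers do not change between trees (a sketch under that assumption, not the actual fix): memoize the repeated computation so each distinct input is computed once across all iterations.

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def resource_class_id(name):
    # Stand-in for a per-iteration lookup that always returns the same
    # answer for the same name; caching makes repeats effectively free.
    return sum(ord(c) for c in name)


def process_trees(tree_dict):
    results = {}
    for root_id, resource_classes in tree_dict.items():
        # Each trip through the loop is cheap, but uncached repeated
        # lookups of the same resource class across trees would add up.
        results[root_id] = {rc: resource_class_id(rc) for rc in resource_classes}
    return results
```

Whether the placement loop actually repeats such lookups is exactly what remains to be investigated.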
-=-=-
diff --git a/nova/
index 851f9719e4.
--- a/nova/
+++ b/nova/
@@ -3233,6 +3233,8 @@ def _build_
if not summary:
+ # This is _expensive_ when there are a large number of rps.
+ # Building the objects differently may be better.
@@ -3519,8 +3521,7 @@ def _alloc_
rp_uuid = rp_summary.
- ctx, resource_
- rp_uuid),
+ ctx, resource_
@@ -3535,6 +3536,8 @@ def _alloc_
alloc_prov_ids = []
# Let's look into each tree
+ # With many resource providers this takes a long time, but each trip
+ # through the loop is not too bad.
for root_id, alloc_dict in tree_dict.items():
# Get request_groups, which is a list of lists of
# AllocationReque
-=-=-
[1] https:/
tags: added: rocky-rc-potential
tags: removed: rocky-rc-potential
tags: added: performance
Fix proposed to branch: master
Review: https://review.openstack.org/589941