When one of several ironic-based nova-compute services dies, a node
rebalance occurs to ensure there is still an active nova-compute
service handling requests for each running instance.
Today, when this occurs, we create a new ComputeNode entry. This change
alters that logic to detect the ironic node rebalance case and, in that
case, re-use the existing ComputeNode entry, simply updating the host
field to match the new host it has been rebalanced onto.
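The re-use logic described above can be sketched roughly as follows. This is a simplified, self-contained toy model, not nova's actual code; the `ComputeNode` dataclass and `ComputeNodeRegistry` class and all their names are hypothetical stand-ins for illustration only.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class ComputeNode:
    # Hypothetical, simplified stand-in for nova's ComputeNode record.
    uuid: str
    hypervisor_hostname: str  # for ironic, the ironic node's identifier
    host: str                 # nova-compute service currently managing it


class ComputeNodeRegistry:
    """Toy registry illustrating the rebalance re-use behaviour."""

    def __init__(self) -> None:
        self._by_hostname: Dict[str, ComputeNode] = {}

    def get_or_create(self, hypervisor_hostname: str, host: str,
                      new_uuid: str) -> ComputeNode:
        existing = self._by_hostname.get(hypervisor_hostname)
        if existing is not None:
            # Ironic rebalance detected: re-use the existing ComputeNode,
            # only moving it to the new host. The uuid is preserved.
            existing.host = host
            return existing
        # First time we see this node: create a fresh entry.
        node = ComputeNode(uuid=new_uuid,
                           hypervisor_hostname=hypervisor_hostname,
                           host=host)
        self._by_hostname[hypervisor_hostname] = node
        return node
```

With this shape, a rebalance updates only the host field, so the same uuid keeps flowing to placement.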
Previously we hit problems with placement when we got a new
ComputeNode.uuid for the same ironic_node.uuid. Re-using the existing
entry keeps the ComputeNode.uuid the same when the rebalance of the
ComputeNode occurs.
Without keeping the same ComputeNode.uuid placement errors out with a 409
because we attempt to create a ResourceProvider that has the same name
as an existing ResourceProvider. Had that worked, we would have noticed
the race that occurs after we create the ResourceProvider but before we
add back the existing allocations for existing instances. Keeping the
ComputeNode.uuid the same means we simply look up the existing
ResourceProvider in placement, avoiding all this pain and tears.
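The 409-versus-lookup behaviour can be illustrated with a minimal in-memory sketch. The `FakePlacement` class, `ConflictError`, and `ensure_resource_provider` are hypothetical stand-ins for the real placement API, shown only to make the failure mode concrete.

```python
class ConflictError(Exception):
    """Stands in for an HTTP 409 Conflict from placement."""


class FakePlacement:
    """Minimal in-memory stand-in for the placement service."""

    def __init__(self):
        self._providers = {}  # uuid -> name
        self._names = set()

    def create_resource_provider(self, uuid, name):
        # Placement rejects a create whose name (or uuid) already exists.
        if name in self._names or uuid in self._providers:
            raise ConflictError(name)
        self._providers[uuid] = name
        self._names.add(name)

    def get_resource_provider(self, uuid):
        return self._providers.get(uuid)


def ensure_resource_provider(placement, uuid, name):
    # When the ComputeNode.uuid survives the rebalance, the provider
    # already exists and we simply look it up. A fresh uuid with the
    # same name would instead attempt a create and hit the 409.
    if placement.get_resource_provider(uuid) is not None:
        return uuid
    placement.create_resource_provider(uuid, name)
    return uuid
```

Looking up by the preserved uuid sidesteps both the 409 and the window between creating the provider and restoring its allocations.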
Reviewed: https://review.openstack.org/527423
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=e95277fa3eba86a7290212313da4dc3c81c286f3
Submitter: Zuul
Branch: stable/pike

commit e95277fa3eba86a7290212313da4dc3c81c286f3
Author: John Garbutt <email address hidden>
Date: Fri Sep 29 15:48:54 2017 +0100
Re-use existing ComputeNode on ironic rebalance
Closes-Bug: #1714248
Co-Authored-By: Dmitry Tantsur <email address hidden>
Change-Id: I4253cffca3dbf558c875eed7e77711a31e9e3406
(cherry picked from commit e3c5e22d1fde7ca916a8cc364f335fba8a3a798f)