Race during ironic re-balance corrupts local RT ProviderTree and compute_nodes cache

Bug #1841481 reported by Matt Riedemann
Affects                      Status        Importance   Assigned to       Milestone
OpenStack Compute (nova)     In Progress   Medium       Matt Riedemann    -
Ocata                        New           Undecided    Unassigned        -
Pike                         New           Undecided    Unassigned        -
Queens                       New           Undecided    Unassigned        -
Rocky                        New           Undecided    Unassigned        -
Stein                        New           Undecided    Unassigned        -
Train                        New           Undecided    Unassigned        -

Bug Description

Seen with an ironic re-balance in this job:

https://d01b2e57f0a56cb7edf0-b6bc206936c08bb07a5f77cfa916a2d4.ssl.cf5.rackcdn.com/678298/4/check/ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode/92c65ac/

On the subnode we see the resource tracker (RT) detect that the node is moving hosts:

Aug 26 18:41:38.818412 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: INFO nova.compute.resource_tracker [None req-a894abee-a2f1-4423-8ede-2a1b9eef28a4 None None] ComputeNode 61dbc9c7-828b-4c42-b19c-a3716037965f moving from ubuntu-bionic-rax-ord-0010443317 to ubuntu-bionic-rax-ord-0010443319

On that new host, the ProviderTree cache is getting updated with refreshed associations for inventory:

Aug 26 18:41:38.881026 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: DEBUG nova.scheduler.client.report [None req-a894abee-a2f1-4423-8ede-2a1b9eef28a4 None None] Refreshing inventories for resource provider 61dbc9c7-828b-4c42-b19c-a3716037965f {{(pid=747) _refresh_associations /opt/stack/nova/nova/scheduler/client/report.py:761}}

aggregates:

Aug 26 18:41:38.953685 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: DEBUG nova.scheduler.client.report [None req-a894abee-a2f1-4423-8ede-2a1b9eef28a4 None None] Refreshing aggregate associations for resource provider 61dbc9c7-828b-4c42-b19c-a3716037965f, aggregates: None {{(pid=747) _refresh_associations /opt/stack/nova/nova/scheduler/client/report.py:770}}

and traits - but by the time we fetch the traits, the provider is gone:

Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager [None req-a894abee-a2f1-4423-8ede-2a1b9eef28a4 None None] Error updating resources for node 61dbc9c7-828b-4c42-b19c-a3716037965f.: ResourceProviderTraitRetrievalFailed: Failed to get traits for resource provider with UUID 61dbc9c7-828b-4c42-b19c-a3716037965f
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager Traceback (most recent call last):
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/compute/manager.py", line 8250, in _update_available_resource_for_node
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager startup=startup)
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/compute/resource_tracker.py", line 715, in update_available_resource
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager self._update_available_resource(context, resources, startup=startup)
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 328, in inner
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager return f(*args, **kwargs)
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/compute/resource_tracker.py", line 738, in _update_available_resource
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager is_new_compute_node = self._init_compute_node(context, resources)
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/compute/resource_tracker.py", line 561, in _init_compute_node
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager if self._check_for_nodes_rebalance(context, resources, nodename):
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/compute/resource_tracker.py", line 516, in _check_for_nodes_rebalance
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager self._update(context, cn)
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1054, in _update
Aug 26 18:41:38.995595 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager self._update_to_placement(context, compute_node, startup)
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/usr/local/lib/python2.7/dist-packages/retrying.py", line 49, in wrapped_f
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager return Retrying(*dargs, **dkw).call(f, *args, **kw)
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/usr/local/lib/python2.7/dist-packages/retrying.py", line 206, in call
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager return attempt.get(self._wrap_exception)
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/usr/local/lib/python2.7/dist-packages/retrying.py", line 247, in get
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager six.reraise(self.value[0], self.value[1], self.value[2])
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/usr/local/lib/python2.7/dist-packages/retrying.py", line 200, in call
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/compute/resource_tracker.py", line 970, in _update_to_placement
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/scheduler/client/report.py", line 858, in get_provider_tree_and_ensure_root
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/scheduler/client/report.py", line 666, in _ensure_resource_provider
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager self._refresh_associations(context, uuid_to_refresh, force=True)
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/scheduler/client/report.py", line 778, in _refresh_associations
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager trait_info = self.get_provider_traits(context, rp_uuid)
Aug 26 18:41:38.996935 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager File "/opt/stack/nova/nova/scheduler/client/report.py", line 381, in get_provider_traits
Aug 26 18:41:38.998320 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager raise exception.ResourceProviderTraitRetrievalFailed(uuid=rp_uuid)
Aug 26 18:41:38.998320 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager ResourceProviderTraitRetrievalFailed: Failed to get traits for resource provider with UUID 61dbc9c7-828b-4c42-b19c-a3716037965f
Aug 26 18:41:38.998320 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.manager

That's because, back on the original host, the compute service deleted the node it no longer reports:

Aug 26 18:41:38.832749 ubuntu-bionic-rax-ord-0010443317 nova-compute[19290]: INFO nova.compute.manager [None req-d5a9c4b6-f197-4f6c-8b12-8f736bbdb11c None None] Deleting orphan compute node 6 hypervisor host is 61dbc9c7-828b-4c42-b19c-a3716037965f, nodes are set([u'1d23263a-31d4-49d9-ad68-be19219c3bae', u'be80f41d-73ed-46ad-b8e4-cefb0193de36', u'f3c6add0-3eda-47d9-9624-c1f73d488066', u'2c909342-b5dc-4203-b9cb-05a8f29c6c35', u'4921f5d8-8b39-4d03-8423-8e8404128ece'])
Aug 26 18:41:38.962237 ubuntu-bionic-rax-ord-0010443317 nova-compute[19290]: INFO nova.scheduler.client.report [None req-d5a9c4b6-f197-4f6c-8b12-8f736bbdb11c None None] Deleted resource provider 61dbc9c7-828b-4c42-b19c-a3716037965f

Every 60 seconds or so after that, the update_available_resource periodic task on the new host should correct this, but there are a couple of problems, and we continue to see that the resource provider is not re-created:

Aug 26 18:42:37.122768 ubuntu-bionic-rax-ord-0010443319 nova-compute[747]: ERROR nova.compute.resource_tracker [None req-ab8d1a0e-385f-4333-bbb5-7b82250968fb None None] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 61dbc9c7-828b-4c42-b19c-a3716037965f: {"errors": [{"status": 404, "request_id": "req-46f04ab5-2bd7-4a13-add9-4b9073587138", "detail": "The resource could not be found.\n\n Resource provider '61dbc9c7-828b-4c42-b19c-a3716037965f' not found: No resource provider with uuid 61dbc9c7-828b-4c42-b19c-a3716037965f found ", "title": "Not Found"}]}: ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 61dbc9c7-828b-4c42-b19c-a3716037965f: {"errors": [{"status": 404, "request_id": "req-46f04ab5-2bd7-4a13-add9-4b9073587138", "detail": "The resource could not be found.\n\n Resource provider '61dbc9c7-828b-4c42-b19c-a3716037965f' not found: No resource provider with uuid 61dbc9c7-828b-4c42-b19c-a3716037965f found ", "title": "Not Found"}]}

First, we don't go through the RT._update flow again because when the new host detects the moved node, it adds it to the compute_nodes dict:

https://github.com/openstack/nova/blob/71478c3eedd95e2eeb219f47460603221ee249b9/nova/compute/resource_tracker.py#L513

It then fails the _update call when fetching traits, because the old host deleted the provider concurrently:

https://github.com/openstack/nova/blob/71478c3eedd95e2eeb219f47460603221ee249b9/nova/compute/resource_tracker.py#L516

After that, update_available_resource will see the node in RT.compute_nodes already and not call _update:

https://github.com/openstack/nova/blob/71478c3eedd95e2eeb219f47460603221ee249b9/nova/compute/resource_tracker.py#L546

Another issue is that there was a race between the time the new host was refreshing and adding the resource provider to its local ProviderTree cache:

https://github.com/openstack/nova/blob/71478c3eedd95e2eeb219f47460603221ee249b9/nova/scheduler/client/report.py#L640

and the time the old host deleted the provider. The new host's ProviderTree cache now has the provider locally, but it is actually gone from placement - remember that this is where we failed to get the traits:

https://github.com/openstack/nova/blob/71478c3eedd95e2eeb219f47460603221ee249b9/nova/scheduler/client/report.py#L778

So there seem to be two things to clean up:

1. If we fail here:

https://github.com/openstack/nova/blob/71478c3eedd95e2eeb219f47460603221ee249b9/nova/compute/resource_tracker.py#L516

We should remove the node from the RT.compute_nodes dict - similar to this fix https://review.opendev.org/#/c/675704/. That will mean we go through RT._update on the next update_available_resource periodic task run.

2. If we fail here:

https://github.com/openstack/nova/blob/71478c3eedd95e2eeb219f47460603221ee249b9/nova/scheduler/client/report.py#L666

We should remove the provider from the local ProviderTree cache so that the next run will find that the provider does not exist and re-create it here:

https://github.com/openstack/nova/blob/71478c3eedd95e2eeb219f47460603221ee249b9/nova/scheduler/client/report.py#L642

Now maybe the "remove from ProviderTree cache on failure" logic there needs to be conditional on whether or not created_rp is None - I'm not sure. It might be best to just always remove the entry from the cache if we got an error trying to refresh associations for it, so we're clean next time - that's likely a better question for Eric Fried. A rough sketch of both cleanups follows.
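
The sketch uses simplified stand-in classes to show where each cleanup would go; these are illustrations of the idea only, not nova's actual ResourceTracker or SchedulerReportClient code, and the fake placement client and its method names are assumptions:

    class MiniProviderTree(object):
        """Minimal stand-in for nova's local ProviderTree cache."""
        def __init__(self):
            self._providers = {}                  # uuid -> provider data

        def exists(self, uuid):
            return uuid in self._providers

        def new_root(self, name, uuid):
            self._providers[uuid] = {'name': name}

        def remove(self, uuid):
            self._providers.pop(uuid, None)


    class MiniReportClient(object):
        """Stand-in for the scheduler report client."""
        def __init__(self, placement):
            self.placement = placement            # fake placement API client
            self._provider_tree = MiniProviderTree()

        def ensure_resource_provider(self, context, rp_uuid, name=None):
            if not self._provider_tree.exists(rp_uuid):
                self.placement.create_provider(rp_uuid, name)
                self._provider_tree.new_root(name or rp_uuid, rp_uuid)
            try:
                # Refreshing associations fails if the provider was deleted
                # from placement concurrently (e.g. by the old host during
                # an ironic rebalance).
                self.placement.get_traits(rp_uuid)
            except Exception:
                # Cleanup #2: evict the stale cache entry so the next call
                # re-creates the provider instead of trusting the cache.
                self._provider_tree.remove(rp_uuid)
                raise
            return rp_uuid


    class MiniResourceTracker(object):
        """Stand-in for the compute resource tracker."""
        def __init__(self, reportclient):
            self.compute_nodes = {}               # nodename -> compute node
            self.reportclient = reportclient

        def check_for_nodes_rebalance(self, context, nodename, cn):
            self.compute_nodes[nodename] = cn
            try:
                self.reportclient.ensure_resource_provider(context, cn.uuid)
            except Exception:
                # Cleanup #1: drop the node from the local cache so the next
                # update_available_resource periodic run goes through _update
                # again instead of skipping the already-cached node.
                self.compute_nodes.pop(nodename, None)
                raise
            return True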

Revision history for this message
Eric Fried (efried) wrote :

Yup, we ran into this race in update_from_provider_tree as well, for which we made _clear_provider_cache_for_tree().

https://github.com/openstack/nova/blob/71478c3eedd95e2eeb219f47460603221ee249b9/nova/scheduler/client/report.py#L1330-L1341

We should invoke the same thing when _refresh_associations fails.

...Possibly *from* _refresh_associations itself.

...Keeping in mind that that guy is recursive :)
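
For illustration, one way to hook that in while respecting the recursion is to catch the failure at the non-recursive caller; the wrapper below is a sketch only, with _clear_provider_cache_for_tree being the existing helper mentioned above:

    def ensure_provider_with_cache_cleanup(client, context, rp_uuid, name=None):
        # Catch the failure here, outside the recursive _refresh_associations,
        # so the cache is cleared once per failed tree rather than at every
        # level of the recursion.
        try:
            return client._ensure_resource_provider(context, rp_uuid, name=name)
        except Exception:
            # The refresh failed (e.g. provider deleted concurrently); drop
            # the whole cached tree so the next periodic starts clean.
            client._clear_provider_cache_for_tree(rp_uuid)
            raise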

Eric Fried (efried)
Changed in nova:
assignee: nobody → Eric Fried (efried)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/678957

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Related fix proposed to branch: master
Review: https://review.opendev.org/678958

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Related fix proposed to branch: master
Review: https://review.opendev.org/678959

Changed in nova:
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.opendev.org/678960

Revision history for this message
Matt Riedemann (mriedem) wrote :

I'm wondering if there is a way to recreate this with a functional test similar to https://review.opendev.org/#/c/675705/ where we'd have two compute services and a single compute node. The node would start on host1, then we'd update the node in the DB to move it from host1 to host2, run the update_available_resource periodic on host1 (which should delete the resource provider), and then run the same periodic on host2 to see if it fails. One thing we'd probably have to stub is injecting the provider into the host2 ProviderTree cache after host1 deletes the provider but before host2 tries to refresh associations for it, which is a bit icky but pretty much the only way to recreate a race in tests.
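
A rough skeleton of such a test might look like the sketch below; the base class and helpers are assumptions modeled on nova's functional test framework, and the steps that would need stubbing are left as comments:

    # Outline only, not a working test.
    from nova.tests.functional import integrated_helpers


    class TestIronicRebalanceRace(integrated_helpers.ProviderUsageBaseTestCase):
        # A fake ironic-style driver would really be needed here.
        compute_driver = 'fake.MediumFakeDriver'

        def test_provider_recreated_after_rebalance_race(self):
            # 1. Start two compute services sharing a single node.
            self._start_compute('host1')
            self._start_compute('host2')
            # 2. Update the compute node in the DB so it moves from host1 to
            #    host2, simulating the ironic hash ring rebalance.
            # 3. Stub/inject the provider into host2's ProviderTree cache,
            #    mimicking the window where host2 cached it just before host1
            #    deleted it from placement.
            # 4. Run the update_available_resource periodic on host1, which
            #    should delete the orphan compute node and its provider.
            # 5. Run the same periodic on host2 and assert the provider gets
            #    re-created rather than failing on the stale cache entry.
            pass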

Revision history for this message
Matt Riedemann (mriedem) wrote :

I think we can consider this a regression going back to https://review.opendev.org/#/c/526540/ which changed the _get_provider_traits method from returning None to raising an error. That doesn't mean that change was wrong, just that it's probably the earliest point at which this becomes a recreatable (backportable) issue.
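
For context, the behavior change being referenced boils down to roughly the following (a simplified paraphrase, not the exact nova code):

    from nova import exception


    def get_provider_traits(client, context, rp_uuid):
        resp = client.get('/resource_providers/%s/traits' % rp_uuid)
        if resp:
            return resp.json()['traits']
        # Before https://review.opendev.org/#/c/526540/ a missing provider
        # effectively resulted in None, which callers tolerated; now the 404
        # is surfaced as an exception and aborts the update for that node.
        raise exception.ResourceProviderTraitRetrievalFailed(uuid=rp_uuid)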

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/684840

Revision history for this message
Matt Riedemann (mriedem) wrote :

Arguably the #1 fix about removing the entry from the ResourceTracker.compute_nodes dict could go back to Ocata because this is in Ocata:

https://review.opendev.org/#/q/I4253cffca3dbf558c875eed7e77711a31e9e3406

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.opendev.org/684849

Changed in nova:
assignee: Eric Fried (efried) → Matt Riedemann (mriedem)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/695188

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by Matt Riedemann (<email address hidden>) on branch: master
Review: https://review.opendev.org/684840
Reason: Let's just go with Mark's series here:

https://review.opendev.org/#/q/topic:bug/1839560+status:open

I've lost context on the bug and fixes and likely not going to drive any of this forward so we'll just go with Mark's changes.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Matt Riedemann (<email address hidden>) on branch: master
Review: https://review.opendev.org/684849
Reason: Let's just go with Mark's series here:

https://review.opendev.org/#/q/topic:bug/1839560+status:open

I've lost context on the bug and fixes and likely not going to drive any of this forward so we'll just go with Mark's changes.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/nova/+/799327

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.opendev.org/c/openstack/nova/+/799327
Committed: https://opendev.org/openstack/nova/commit/f84d5917c6fb045f03645d9f80eafbc6e5f94bdd
Submitter: "Zuul (22348)"
Branch: master

commit f84d5917c6fb045f03645d9f80eafbc6e5f94bdd
Author: Julia Kreger <email address hidden>
Date: Fri Jul 2 12:10:52 2021 -0700

    [ironic] Minimize window for a resource provider to be lost

    This patch is based upon a downstream patch which came up in discussion
    amongst the ironic community when some operators began discussing a case
    where resource providers had disappeared from a running deployment with
    several thousand baremetal nodes.

    Discussion amongst operators and developers ensued and we were able
    to determine that this was still an issue in the current upstream code
    and that time difference between collecting data and then reconciling
    the records was a source of the issue. Per Arun, they have been running
    this change downstream and had not seen any recurrences of the issue
    since the patch was applied.

    This patch was originally authored by Arun S A G, and below is his
    original commit message.

    An instance could be launched and scheduled to a compute node between
    get_uuids_by_host() call and _get_node_list() call. If that happens
    the ironic node.instance_uuid may not be None but the instance_uuid
    will be missing from the instance list returned by get_uuids_by_host()
    method. This is possible because _get_node_list() takes several minutes to return
    in large baremetal clusters and a lot can happen in that time.

    This causes the compute node to be orphaned and the associated resource
    provider to be deleted from placement. Once the resource provider is
    deleted, it is never created again until the service restarts. Since the
    resource provider is deleted, subsequent boots/rebuilds to the same
    host will fail.

    This behaviour is visible in VMbooter nodes because it constantly
    launches and deletes instances, thereby increasing the likelihood
    of this race condition happening in large ironic clusters.

    To reduce the chance of this race condition we call _get_node_list()
    first followed by get_uuids_by_host() method.

    Change-Id: I55bde8dd33154e17bbdb3c4b0e7a83a20e8487e8
    Co-Authored-By: Arun S A G <email address hidden>
    Related-Bug: #1841481
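
In code terms, the reordering described above amounts to something like the sketch below (the wrapper function is illustrative; the two calls are the ones named in the commit message):

    from nova import objects


    def collect_rebalance_data(driver, context, host):
        # Slow call first: listing all ironic nodes can take minutes on
        # large baremetal clusters.
        nodes = driver._get_node_list()
        # Fast call second: any instance scheduled while the node list was
        # being built is now included, so its node is not mistaken for an
        # orphan and its resource provider is not deleted.
        instance_uuids = objects.InstanceList.get_uuids_by_host(context, host)
        return nodes, instance_uuids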

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/wallaby)

Related fix proposed to branch: stable/wallaby
Review: https://review.opendev.org/c/openstack/nova/+/799772

Revision history for this message
melanie witt (melwitt) wrote :

This bug seems to have the same root cause as:

https://bugs.launchpad.net/nova/+bug/1853009

which has patches under review, so I'm going to mark this bug as a duplicate of it in an attempt to reduce confusion.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/wallaby)

Reviewed: https://review.opendev.org/c/openstack/nova/+/799772
Committed: https://opendev.org/openstack/nova/commit/0c36bd28ebd05ec0b1dbae950a24a2ecf339be00
Submitter: "Zuul (22348)"
Branch: stable/wallaby

commit 0c36bd28ebd05ec0b1dbae950a24a2ecf339be00
Author: Julia Kreger <email address hidden>
Date: Fri Jul 2 12:10:52 2021 -0700

    [ironic] Minimize window for a resource provider to be lost

    This patch is based upon a downstream patch which came up in discussion
    amongst the ironic community when some operators began discussing a case
    where resource providers had disappeared from a running deployment with
    several thousand baremetal nodes.

    Discussion amongst operators and developers ensued and we were able
    to determine that this was still an issue in the current upstream code
    and that time difference between collecting data and then reconciling
    the records was a source of the issue. Per Arun, they have been running
    this change downstream and had not seen any recurrences of the issue
    since the patch was applied.

    This patch was originally authored by Arun S A G, and below is his
    original commit message.

    An instance could be launched and scheduled to a compute node between
    get_uuids_by_host() call and _get_node_list() call. If that happens
    the ironic node.instance_uuid may not be None but the instance_uuid
    will be missing from the instance list returned by get_uuids_by_host()
    method. This is possible because _get_node_list() takes several minutes to return
    in large baremetal clusters and a lot can happen in that time.

    This causes the compute node to be orphaned and the associated resource
    provider to be deleted from placement. Once the resource provider is
    deleted, it is never created again until the service restarts. Since the
    resource provider is deleted, subsequent boots/rebuilds to the same
    host will fail.

    This behaviour is visible in VMbooter nodes because it constantly
    launches and deletes instances, thereby increasing the likelihood
    of this race condition happening in large ironic clusters.

    To reduce the chance of this race condition we call _get_node_list()
    first followed by get_uuids_by_host() method.

    Change-Id: I55bde8dd33154e17bbdb3c4b0e7a83a20e8487e8
    Co-Authored-By: Arun S A G <email address hidden>
    Related-Bug: #1841481
    (cherry picked from commit f84d5917c6fb045f03645d9f80eafbc6e5f94bdd)

tags: added: in-stable-wallaby
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/victoria)

Related fix proposed to branch: stable/victoria
Review: https://review.opendev.org/c/openstack/nova/+/800873

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.opendev.org/c/openstack/nova/+/695188
Committed: https://opendev.org/openstack/nova/commit/2bb4527228c8e6fa4a1fa6cfbe80e8790e4e0789
Submitter: "Zuul (22348)"
Branch: master

commit 2bb4527228c8e6fa4a1fa6cfbe80e8790e4e0789
Author: Mark Goddard <email address hidden>
Date: Tue Nov 19 16:51:01 2019 +0000

    Invalidate provider tree when compute node disappears

    There is a race condition in nova-compute with the ironic virt driver
    as nodes get rebalanced. It can lead to compute nodes being removed in
    the DB and not repopulated. Ultimately this prevents these nodes from
    being scheduled to.

    The issue being addressed here is that if a compute node is deleted by a
    host which thinks it is an orphan, then the resource provider for that
    node might also be deleted. The compute host that owns the node might
    not recreate the resource provider if it exists in the provider tree
    cache.

    This change fixes the issue by clearing resource providers from the
    provider tree cache for which a compute node entry does not exist. Then,
    when the available resource for the node is updated, the resource
    providers are not found in the cache and get recreated in placement.

    Change-Id: Ia53ff43e6964963cdf295604ba0fb7171389606e
    Related-Bug: #1853009
    Related-Bug: #1841481
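
The gist of that change, in simplified form (the helper below is illustrative; the ProviderTree calls are modeled loosely on nova's provider tree interface):

    def evict_stale_cached_providers(provider_tree, compute_node_uuids):
        # Evict cached resource providers that no longer correspond to a
        # compute node owned by this host, so the next
        # update_available_resource run re-creates them in placement.
        for rp_uuid in provider_tree.get_provider_uuids():
            if rp_uuid not in compute_node_uuids:
                # The node disappeared (e.g. deleted by another host during
                # an ironic rebalance); forget the cached provider.
                provider_tree.remove(rp_uuid)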

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Reviewed: https://review.opendev.org/c/openstack/nova/+/694802
Committed: https://opendev.org/openstack/nova/commit/a8492e88783b40f6dc61888fada232f0d00d6acf
Submitter: "Zuul (22348)"
Branch: master

commit a8492e88783b40f6dc61888fada232f0d00d6acf
Author: Mark Goddard <email address hidden>
Date: Mon Nov 18 12:06:47 2019 +0000

    Prevent deletion of a compute node belonging to another host

    There is a race condition in nova-compute with the ironic virt driver as
    nodes get rebalanced. It can lead to compute nodes being removed in the
    DB and not repopulated. Ultimately this prevents these nodes from being
    scheduled to.

    The main race condition involved is in update_available_resources in
    the compute manager. When the list of compute nodes is queried, there is
    a compute node belonging to the host that it does not expect to be
    managing, i.e. it is an orphan. Between that time and deleting the
    orphan, the real owner of the compute node takes ownership of it (in
    the resource tracker). However, the node is still deleted as the first
    host is unaware of the ownership change.

    This change prevents this from occurring by filtering on the host when
    deleting a compute node. If another compute host has taken ownership of
    a node, it will have updated the host field and this will prevent
    deletion from occurring. The first host sees this has happened via the
    ComputeHostNotFound exception, and avoids deleting its resource
    provider.

    Co-Authored-By: melanie witt <email address hidden>

    Closes-Bug: #1853009
    Related-Bug: #1841481

    Change-Id: I260c1fded79a85d4899e94df4d9036a1ee437f02
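
The guard described above boils down to something like the following sketch (the host-constrained delete helper is hypothetical; the real change wires this through the ComputeNode object and the DB API):

    from oslo_log import log as logging

    from nova import exception

    LOG = logging.getLogger(__name__)


    def destroy_orphan_compute_node(context, cn, my_host, db, reportclient):
        try:
            # Hypothetical host-constrained delete: raises ComputeHostNotFound
            # if the row's host column no longer matches my_host.
            db.compute_node_delete(context, cn.id, constraint_host=my_host)
        except exception.ComputeHostNotFound:
            LOG.info("Compute node %s was taken over by another host; not "
                     "deleting it or its resource provider.", cn.uuid)
            return
        # Safe to delete the provider: no other host owns this node any more.
        reportclient.delete_resource_provider(context, cn, cascade=True)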

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/wallaby)

Related fix proposed to branch: stable/wallaby
Review: https://review.opendev.org/c/openstack/nova/+/811807

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Related fix proposed to branch: stable/wallaby
Review: https://review.opendev.org/c/openstack/nova/+/811808

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/victoria)

Related fix proposed to branch: stable/victoria
Review: https://review.opendev.org/c/openstack/nova/+/811812

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Related fix proposed to branch: stable/victoria
Review: https://review.opendev.org/c/openstack/nova/+/811813

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/ussuri)

Related fix proposed to branch: stable/ussuri
Review: https://review.opendev.org/c/openstack/nova/+/811817

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Related fix proposed to branch: stable/ussuri
Review: https://review.opendev.org/c/openstack/nova/+/811818

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/train)

Related fix proposed to branch: stable/train
Review: https://review.opendev.org/c/openstack/nova/+/811823

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Related fix proposed to branch: stable/train
Review: https://review.opendev.org/c/openstack/nova/+/811824

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/wallaby)

Reviewed: https://review.opendev.org/c/openstack/nova/+/811807
Committed: https://opendev.org/openstack/nova/commit/0fc104eeea065579f7fa9b52794d5151baefc84c
Submitter: "Zuul (22348)"
Branch: stable/wallaby

commit 0fc104eeea065579f7fa9b52794d5151baefc84c
Author: Mark Goddard <email address hidden>
Date: Tue Nov 19 16:51:01 2019 +0000

    Invalidate provider tree when compute node disappears

    There is a race condition in nova-compute with the ironic virt driver
    as nodes get rebalanced. It can lead to compute nodes being removed in
    the DB and not repopulated. Ultimately this prevents these nodes from
    being scheduled to.

    The issue being addressed here is that if a compute node is deleted by a
    host which thinks it is an orphan, then the resource provider for that
    node might also be deleted. The compute host that owns the node might
    not recreate the resource provider if it exists in the provider tree
    cache.

    This change fixes the issue by clearing resource providers from the
    provider tree cache for which a compute node entry does not exist. Then,
    when the available resource for the node is updated, the resource
    providers are not found in the cache and get recreated in placement.

    Change-Id: Ia53ff43e6964963cdf295604ba0fb7171389606e
    Related-Bug: #1853009
    Related-Bug: #1841481
    (cherry picked from commit 2bb4527228c8e6fa4a1fa6cfbe80e8790e4e0789)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Reviewed: https://review.opendev.org/c/openstack/nova/+/811808
Committed: https://opendev.org/openstack/nova/commit/cbbca58504275f194ec55eeb89dad4a496d98060
Submitter: "Zuul (22348)"
Branch: stable/wallaby

commit cbbca58504275f194ec55eeb89dad4a496d98060
Author: Mark Goddard <email address hidden>
Date: Mon Nov 18 12:06:47 2019 +0000

    Prevent deletion of a compute node belonging to another host

    There is a race condition in nova-compute with the ironic virt driver as
    nodes get rebalanced. It can lead to compute nodes being removed in the
    DB and not repopulated. Ultimately this prevents these nodes from being
    scheduled to.

    The main race condition involved is in update_available_resources in
    the compute manager. When the list of compute nodes is queried, there is
    a compute node belonging to the host that it does not expect to be
    managing, i.e. it is an orphan. Between that time and deleting the
    orphan, the real owner of the compute node takes ownership of it (in
    the resource tracker). However, the node is still deleted as the first
    host is unaware of the ownership change.

    This change prevents this from occurring by filtering on the host when
    deleting a compute node. If another compute host has taken ownership of
    a node, it will have updated the host field and this will prevent
    deletion from occurring. The first host sees this has happened via the
    ComputeHostNotFound exception, and avoids deleting its resource
    provider.

    Co-Authored-By: melanie witt <email address hidden>

    Conflicts:
        nova/db/sqlalchemy/api.py

    NOTE(melwitt): The conflict is because change
    I9f414cf831316b624132d9e06192f1ecbbd3dd78 (db: Copy docs from
    'nova.db.*' to 'nova.db.sqlalchemy.*') is not in Wallaby.

    NOTE(melwitt): Differences from the cherry picked change from calling
    nova.db.api => nova.db.sqlalchemy.api directly are due to the alembic
    migration in Xena which looks to have made the nova.db.api interface
    obsolete.

    Closes-Bug: #1853009
    Related-Bug: #1841481

    Change-Id: I260c1fded79a85d4899e94df4d9036a1ee437f02
    (cherry picked from commit a8492e88783b40f6dc61888fada232f0d00d6acf)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/victoria)

Reviewed: https://review.opendev.org/c/openstack/nova/+/800873
Committed: https://opendev.org/openstack/nova/commit/bc5fc2bc688056bc18cf3ae581d8e23592d110da
Submitter: "Zuul (22348)"
Branch: stable/victoria

commit bc5fc2bc688056bc18cf3ae581d8e23592d110da
Author: Julia Kreger <email address hidden>
Date: Fri Jul 2 12:10:52 2021 -0700

    [ironic] Minimize window for a resource provider to be lost

    This patch is based upon a downstream patch which came up in discussion
    amongst the ironic community when some operators began discussing a case
    where resource providers had disappeared from a running deployment with
    several thousand baremetal nodes.

    Discussion amongst operators and developers ensued and we were able
    to determine that this was still an issue in the current upstream code
    and that time difference between collecting data and then reconciling
    the records was a source of the issue. Per Arun, they have been running
    this change downstream and had not seen any recurrences of the issue
    since the patch was applied.

    This patch was originally authored by Arun S A G, and below is his
    original commit message.

    An instance could be launched and scheduled to a compute node between
    get_uuids_by_host() call and _get_node_list() call. If that happens
    the ironic node.instance_uuid may not be None but the instance_uuid
    will be missing from the instance list returned by get_uuids_by_host()
    method. This is possible because _get_node_list() takes several minutes to return
    in large baremetal clusters and a lot can happen in that time.

    This causes the compute node to be orphaned and the associated resource
    provider to be deleted from placement. Once the resource provider is
    deleted, it is never created again until the service restarts. Since the
    resource provider is deleted, subsequent boots/rebuilds to the same
    host will fail.

    This behaviour is visible in VMbooter nodes because it constantly
    launches and deletes instances, thereby increasing the likelihood
    of this race condition happening in large ironic clusters.

    To reduce the chance of this race condition we call _get_node_list()
    first followed by get_uuids_by_host() method.

    Change-Id: I55bde8dd33154e17bbdb3c4b0e7a83a20e8487e8
    Co-Authored-By: Arun S A G <email address hidden>
    Related-Bug: #1841481
    (cherry picked from commit f84d5917c6fb045f03645d9f80eafbc6e5f94bdd)
    (cherry picked from commit 0c36bd28ebd05ec0b1dbae950a24a2ecf339be00)

tags: added: in-stable-victoria
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/ussuri)

Related fix proposed to branch: stable/ussuri
Review: https://review.opendev.org/c/openstack/nova/+/853540

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/train)

Related fix proposed to branch: stable/train
Review: https://review.opendev.org/c/openstack/nova/+/853546

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/ussuri)

Reviewed: https://review.opendev.org/c/openstack/nova/+/853540
Committed: https://opendev.org/openstack/nova/commit/67be896e0f70ac3f4efc4c87fc03395b7029e345
Submitter: "Zuul (22348)"
Branch: stable/ussuri

commit 67be896e0f70ac3f4efc4c87fc03395b7029e345
Author: Julia Kreger <email address hidden>
Date: Fri Jul 2 12:10:52 2021 -0700

    [ironic] Minimize window for a resource provider to be lost

    This patch is based upon a downstream patch which came up in discussion
    amongst the ironic community when some operators began discussing a case
    where resource providers had disappeared from a running deployment with
    several thousand baremetal nodes.

    Discussion amongst operators and developers ensued and we were able
    to determine that this was still an issue in the current upstream code
    and that time difference between collecting data and then reconciling
    the records was a source of the issue. Per Arun, they have been running
    this change downstream and had not seen any recurrences of the issue
    since the patch was applied.

    This patch was originally authored by Arun S A G, and below is his
    original commit message.

    An instance could be launched and scheduled to a compute node between
    get_uuids_by_host() call and _get_node_list() call. If that happens
    the ironic node.instance_uuid may not be None but the instance_uuid
    will be missing from the instance list returned by get_uuids_by_host()
    method. This is possible because _get_node_list() takes several minutes to return
    in large baremetal clusters and a lot can happen in that time.

    This causes the compute node to be orphaned and the associated resource
    provider to be deleted from placement. Once the resource provider is
    deleted, it is never created again until the service restarts. Since the
    resource provider is deleted, subsequent boots/rebuilds to the same
    host will fail.

    This behaviour is visible in VMbooter nodes because it constantly
    launches and deletes instances, thereby increasing the likelihood
    of this race condition happening in large ironic clusters.

    To reduce the chance of this race condition we call _get_node_list()
    first followed by get_uuids_by_host() method.

    Change-Id: I55bde8dd33154e17bbdb3c4b0e7a83a20e8487e8
    Co-Authored-By: Arun S A G <email address hidden>
    Related-Bug: #1841481
    (cherry picked from commit f84d5917c6fb045f03645d9f80eafbc6e5f94bdd)
    (cherry picked from commit 0c36bd28ebd05ec0b1dbae950a24a2ecf339be00)

tags: added: in-stable-ussuri
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/train)

Reviewed: https://review.opendev.org/c/openstack/nova/+/853546
Committed: https://opendev.org/openstack/nova/commit/912fabac79ae48a856de9786678846efa01395b1
Submitter: "Zuul (22348)"
Branch: stable/train

commit 912fabac79ae48a856de9786678846efa01395b1
Author: Julia Kreger <email address hidden>
Date: Fri Jul 2 12:10:52 2021 -0700

    [ironic] Minimize window for a resource provider to be lost

    This patch is based upon a downstream patch which came up in discussion
    amongst the ironic community when some operators began discussing a case
    where resource providers had disappeared from a running deployment with
    several thousand baremetal nodes.

    Discussion amongst operators and developers ensued and we were able
    to determine that this was still an issue in the current upstream code
    and that time difference between collecting data and then reconciling
    the records was a source of the issue. Per Arun, they have been running
    this change downstream and had not seen any recurrences of the issue
    since the patch was applied.

    This patch was originally authored by Arun S A G, and below is his
    original commit message.

    An instance could be launched and scheduled to a compute node between
    get_uuids_by_host() call and _get_node_list() call. If that happens
    the ironic node.instance_uuid may not be None but the instance_uuid
    will be missing from the instance list returned by get_uuids_by_host()
    method. This is possible because _get_node_list() takes several minutes to return
    in large baremetal clusters and a lot can happen in that time.

    This causes the compute node to be orphaned and the associated resource
    provider to be deleted from placement. Once the resource provider is
    deleted, it is never created again until the service restarts. Since the
    resource provider is deleted, subsequent boots/rebuilds to the same
    host will fail.

    This behaviour is visible in VMbooter nodes because it constantly
    launches and deletes instances, thereby increasing the likelihood
    of this race condition happening in large ironic clusters.

    To reduce the chance of this race condition we call _get_node_list()
    first followed by get_uuids_by_host() method.

    Change-Id: I55bde8dd33154e17bbdb3c4b0e7a83a20e8487e8
    Co-Authored-By: Arun S A G <email address hidden>
    Related-Bug: #1841481
    (cherry picked from commit f84d5917c6fb045f03645d9f80eafbc6e5f94bdd)
    (cherry picked from commit 0c36bd28ebd05ec0b1dbae950a24a2ecf339be00)
    (cherry picked from commit 67be896e0f70ac3f4efc4c87fc03395b7029e345)

tags: added: in-stable-train
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (stable/train)

Change abandoned by "Elod Illes <email address hidden>" on branch: stable/train
Review: https://review.opendev.org/c/openstack/nova/+/811824
Reason: The stable/train branch of nova projects has been tagged as End of Life. All open patches have to be abandoned in order to be able to delete the branch.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by "Elod Illes <email address hidden>" on branch: stable/train
Review: https://review.opendev.org/c/openstack/nova/+/811823
Reason: The stable/train branch of nova projects has been tagged as End of Life. All open patches have to be abandoned in order to be able to delete the branch.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (stable/ussuri)

Change abandoned by "Elod Illes <email address hidden>" on branch: stable/ussuri
Review: https://review.opendev.org/c/openstack/nova/+/811818
Reason: stable/ussuri branch of openstack/nova transitioned to End of Life and is about to be deleted. To be able to do that, all open patches need to be abandoned.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by "Elod Illes <email address hidden>" on branch: stable/ussuri
Review: https://review.opendev.org/c/openstack/nova/+/811817
Reason: stable/ussuri branch of openstack/nova transitioned to End of Life and is about to be deleted. To be able to do that, all open patches need to be abandoned.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (stable/victoria)

Change abandoned by "Elod Illes <email address hidden>" on branch: stable/victoria
Review: https://review.opendev.org/c/openstack/nova/+/811812
Reason: stable/victoria branch of openstack/nova is about to be deleted. To be able to do that, all open patches need to be abandoned. Please cherry pick the patch to unmaintained/victoria if you want to further work on this patch.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by "Elod Illes <email address hidden>" on branch: stable/victoria
Review: https://review.opendev.org/c/openstack/nova/+/811813
Reason: stable/victoria branch of openstack/nova is about to be deleted. To be able to do that, all open patches need to be abandoned. Please cherry pick the patch to unmaintained/victoria if you want to further work on this patch.
