commit 11cb42f396fdbc1d973e1a1b592c00896f646015
Author: Matt Riedemann <email address hidden>
Date: Fri Jun 28 18:50:33 2019 -0400
Restore RT.old_resources if ComputeNode.save() fails
When starting nova-compute for the first time with a new node,
the ResourceTracker will create a new ComputeNode record in
_init_compute_node but without all of the fields set on the
ComputeNode, for example "free_disk_gb".
Later, _update_usage_from_instances will set some fields on the
ComputeNode record, like free_disk_gb, even if there are no
instances on the node (why it does that is unclear).
This will make the eventual call from _update() to _resource_change()
update the value in the old_resources dict and return True, and then
_update() will try to update those ComputeNode changes to the database.
If that update fails, for example due to a DBConnectionError, the
value in old_resources will still reflect the current in-memory
version of the node, not what is actually in the database.
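The problem hinges on _resource_change() mutating the cache before the
database save happens. A minimal sketch of that copy-and-compare pattern
(MiniTracker and its methods are illustrative stand-ins, not nova's
actual code):

```python
import copy

class MiniTracker:
    """Toy model of the ResourceTracker's change-detection cache."""

    def __init__(self):
        self.old_resources = {}  # nodename -> last-saved ComputeNode state

    def _resource_change(self, nodename, compute_node):
        """Return True if the node differs from the cached copy.

        Note the side effect: the cache is updated *before* any DB
        save is attempted, which is the root of the bug described
        above - a failed save leaves the cache ahead of the database.
        """
        old = self.old_resources.get(nodename)
        if compute_node != old:
            self.old_resources[nodename] = copy.deepcopy(compute_node)
            return True
        return False


tracker = MiniTracker()
node = {'free_disk_gb': 100}
assert tracker._resource_change('node1', node) is True   # first sight: changed
assert tracker._resource_change('node1', node) is False  # cache now matches
```

If the save between those two calls fails, the second call still returns
False, so the tracker never retries the write.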
Note that this failure does not result in the compute service failing
to start because ComputeManager._update_available_resource_for_node
traps the Exception and just logs it.
A subsequent trip through the RT._update() method - because of the
update_available_resource periodic task - will call _resource_change
but because old_resources matches the current state of the node, it
returns False and the RT does not attempt to persist the changes to
the DB. _update() will then go on to call _update_to_placement
which will create the resource provider in placement along with its
inventory, making it potentially a candidate for scheduling.
This can be a problem later in the scheduler because the
HostState._update_from_compute_node method may skip setting fields
on the HostState object if free_disk_gb is not set in the
ComputeNode record - which can then break filters and weighers
later in the scheduling process (see bug 1834691 and bug 1834694).
The fix proposed here is simple: if the ComputeNode.save() in
RT._update() fails, restore the previous value in old_resources
so that the subsequent run through _resource_change will compare the
correct state of the object and retry the update.
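The shape of that fix can be sketched as a snapshot-and-rollback around
the save. This is an illustrative model under assumed names
(TrackerWithRestore, the save callable), not the actual nova patch:

```python
import copy

class TrackerWithRestore:
    """Toy tracker demonstrating the old_resources rollback fix."""

    def __init__(self):
        self.old_resources = {}

    def _resource_change(self, nodename, compute_node):
        old = self.old_resources.get(nodename)
        if compute_node != old:
            self.old_resources[nodename] = copy.deepcopy(compute_node)
            return True
        return False

    def _update(self, nodename, compute_node, save):
        # Snapshot the cache entry before _resource_change mutates it.
        old_copy = copy.deepcopy(self.old_resources.get(nodename))
        if self._resource_change(nodename, compute_node):
            try:
                save(compute_node)  # stand-in for ComputeNode.save()
            except Exception:
                # The fix: restore the pre-change snapshot so the next
                # periodic run sees a difference and retries the save.
                self.old_resources[nodename] = old_copy
                raise


tracker = TrackerWithRestore()
node = {'free_disk_gb': 100}

def failing_save(cn):
    raise RuntimeError('simulated DBConnectionError')

try:
    tracker._update('node1', node, failing_save)
except RuntimeError:
    pass
# Cache was rolled back, so the next run still detects a change:
assert tracker._resource_change('node1', node) is True
```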
An alternative to this would be killing the compute service on startup
if there is a DB error but that could have unintended side effects,
especially if the DB error is transient and can be fixed on the next
try.
Obviously the scheduler code needs to be more robust also, but those
improvements are left for separate changes related to the other bugs
mentioned above.
Also, ComputeNode.update_from_virt_driver could be updated to set
free_disk_gb if possible to work around the tight coupling in the
HostState._update_from_compute_node code, but that's also sort of
a whack-a-mole type change best made separately.
Reviewed: https://review.opendev.org/668263
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=11cb42f396fdbc1d973e1a1b592c00896f646015
Submitter: Zuul
Branch: master
Change-Id: Id3c847be32d8a1037722d08bf52e4b88dc5adc97
Closes-Bug: #1834712