Scheduler doesn't filter out deleted compute node records based on placement RP UUIDs
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Compute (nova) | Fix Released | Medium | Dan Smith | |
| Ocata | Fix Committed | Medium | Mohammed Naser | |
| Pike | Fix Committed | Medium | Mohammed Naser | |
| Queens | Fix Committed | Medium | Mohammed Naser | |
| Rocky | Fix Committed | Medium | Mohammed Naser | |
Bug Description
If you are taking a nova-compute service out of service permanently, the logical steps would be:
1) Take down the service
2) Delete it from the service list (nova service-delete <uuid>)
However, this does not delete the compute node record, which stays around forever and causes the scheduler to continually warn about it:
2018-09-20 13:15:45.312 131035 WARNING nova.scheduler.
https:/
We should delete the compute node record when a nova-compute service is deleted, or the scheduler should automatically clean up stale records while warning (since service records can be rebuilt anyway).
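The second option above can be sketched roughly as follows: when the scheduler encounters a compute node record with no matching nova-compute service, it warns and drops the stale record instead of complaining forever. All names here are illustrative, not nova's actual scheduler code.

```python
import logging

LOG = logging.getLogger(__name__)

def filter_stale_nodes(compute_nodes, live_service_hosts):
    """Return only nodes backed by a live service; warn about the rest."""
    fresh = []
    for node in compute_nodes:
        if node["host"] in live_service_hosts:
            fresh.append(node)
        else:
            # Warn once and drop the record rather than re-warning forever.
            LOG.warning("Dropping compute node %s: no matching "
                        "nova-compute service", node["host"])
    return fresh

nodes = [{"host": "node-a"}, {"host": "node-b"}]
fresh = filter_stale_nodes(nodes, {"node-a"})
```
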
Changed in nova:
assignee: nobody → Dan Smith (danms)
Are you sure you're stopping the nova-compute service before deleting the actual service record via the API?
https://developer.openstack.org/api-ref/compute/#delete-compute-service
Otherwise the ResourceTracker in the compute process will recreate the compute node.
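To illustrate why stopping the service first matters: a still-running ResourceTracker periodically ensures its compute node record exists and will re-create it after a delete. This is a simplified in-memory model, not nova's actual ResourceTracker.

```python
compute_nodes = {}

def resource_tracker_periodic(host):
    """Simplified periodic task: re-create the compute node if missing."""
    if host not in compute_nodes:
        compute_nodes[host] = {"host": host}

def delete_compute_node(host):
    """Simulate the API deleting the compute node record."""
    compute_nodes.pop(host, None)

resource_tracker_periodic("node-a")   # compute starts, record created
delete_compute_node("node-a")         # operator deletes the service/node
resource_tracker_periodic("node-a")   # compute still running: record is back
```
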
The Service.destroy is called from the API here:
https://github.com/openstack/nova/blob/d87852ae6a1987b6faa3cb5851f9758b47ef4636/nova/api/openstack/compute/services.py#L251
Which eventually calls the DB API to delete the associated compute node record:
https://github.com/openstack/nova/blob/d87852ae6a1987b6faa3cb5851f9758b47ef4636/nova/db/sqlalchemy/api.py#L404
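The cascade described above can be sketched as: the API-level service delete ends in a DB call that soft-deletes both the service row and its compute node rows. Table layout and names here are illustrative only; nova's real implementation is at the links above.

```python
# In-memory stand-ins for the services and compute_nodes tables.
services = {1: {"host": "node-a", "deleted": 0}}
compute_nodes = {10: {"service_host": "node-a", "deleted": 0}}

def service_destroy(service_id):
    """Soft-delete a service and any compute node records it backs."""
    svc = services[service_id]
    # Nova's soft-delete convention marks deleted rows with their own id.
    svc["deleted"] = service_id
    for cn in compute_nodes.values():
        if cn["service_host"] == svc["host"]:
            cn["deleted"] = 1

service_destroy(1)
```
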