Resource tracker double counts vcpus_used
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Fix Released | High | Paul Murray | mitaka-1
Bug Description
The resource tracker is double counting the number of vcpus used.
It is believed this behavior was introduced in: https:/
That patch moved the vcpu accounting from the extensible resource tracker plugin into the main resource tracker code.
This behavior is present at openstack/nova master commit 4fb0da175d62084.
To reproduce:
* Set up devstack (e.g. simple single machine setup)
* boot an instance
* initially 'nova hypervisor-show devstack' will show the correct vcpu used count
* wait for compute manager periodic task to run, updating the available resources
* now 'nova hypervisor-show devstack' will show double the used vcpu count
Changed in nova:
importance: Undecided → High
Changed in nova:
status: Fix Committed → Fix Released
Changed in nova:
milestone: none → mitaka-1
The hypervisor returns its own view of the number of vcpus used. This value is copied to the compute_node record and is not reset before the resource tracker adds its own view on top. Code to reset the memory and disk counters is included in _update_usage_from_instances(), as below; the extensible resource plugins are also called from here. Now that the vcpu resource has been removed from the plugins and the accounting is done directly on the compute_node object, vcpus_used should be reset to 0 here as well.
    def _update_usage_from_instances(self, context, instances):
        """Calculate resource usage based on instance utilization. This is
        different than the hypervisor's view as it will account for all
        instances assigned to the local compute host, even if they are not
        currently powered on.
        """
        self.tracked_instances.clear()

        # set some initial values, reserve room for host/hypervisor:
        self.compute_node.local_gb_used = CONF.reserved_host_disk_mb / 1024
        self.compute_node.memory_mb_used = CONF.reserved_host_memory_mb
        self.compute_node.free_ram_mb = (self.compute_node.memory_mb -
                                         self.compute_node.memory_mb_used)
        self.compute_node.free_disk_gb = (self.compute_node.local_gb -
                                          self.compute_node.local_gb_used)
        self.compute_node.current_workload = 0
        self.compute_node.running_vms = 0

        # Reset values for extended resources
        self.ext_resources_handler.reset_resources(self.compute_node,
                                                   self.driver)
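The double count can be sketched with a toy model. This is not the real nova code; ComputeNode and ResourceTracker below are simplified stand-ins that only illustrate the arithmetic: the hypervisor's vcpus_used figure is copied into the compute_node record, and without a reset the per-instance accounting is added on top of it.

    # Toy sketch of the double-counting behavior and the proposed reset.
    # ComputeNode and ResourceTracker are hypothetical stand-ins, not
    # the real nova objects.

    class ComputeNode:
        def __init__(self):
            self.vcpus_used = 0

    class ResourceTracker:
        def __init__(self, instances):
            self.compute_node = ComputeNode()
            self.instances = instances  # vcpus per instance

        def update_available_resource(self, reset_vcpus):
            # The periodic task copies the hypervisor's own view of
            # vcpus_used into the compute_node record.
            self.compute_node.vcpus_used = sum(self.instances)
            if reset_vcpus:
                # The proposed fix: zero the counter before the
                # tracker re-adds usage per instance.
                self.compute_node.vcpus_used = 0
            for vcpus in self.instances:
                self.compute_node.vcpus_used += vcpus

    buggy = ResourceTracker([2])
    buggy.update_available_resource(reset_vcpus=False)
    print(buggy.compute_node.vcpus_used)  # 4: double the real usage

    fixed = ResourceTracker([2])
    fixed.update_available_resource(reset_vcpus=True)
    print(fixed.compute_node.vcpus_used)  # 2: correct

With one 2-vcpu instance, the unfixed path reports 4 vcpus used after the periodic task runs, matching the reproduction steps above; resetting the counter first gives the correct value of 2.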