Comment 4 for bug 1508907

Tobias Urdin (tobias-urdin) wrote :

I'm investigating the resource_tracker.py file in the nova/compute folder.
As one can see, the passed Instance objects contain the "root_gb" value, which in our case is the size of the Cinder volume, since we boot our instances with a Cinder volume as the root volume/disk.

And in resource_tracker.py, _update_usage calculates local_gb_used using: self.compute_node.local_gb_used += sign * usage.get('root_gb', 0)

(where sign defaults to 1 in the function signature)
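To make the effect of that line concrete, here is a minimal standalone sketch (not actual Nova code; the class and dict shapes are simplified assumptions) showing how the accounting goes wrong for a volume-backed instance:

```python
# Simplified stand-in for the ComputeNode object the resource tracker updates.
class FakeComputeNode:
    def __init__(self):
        self.local_gb_used = 0


def update_usage(compute_node, usage, sign=1):
    # Mirrors the line quoted above: root_gb is added to local_gb_used
    # regardless of whether the root disk is actually a Cinder volume
    # rather than local storage.
    compute_node.local_gb_used += sign * usage.get('root_gb', 0)


node = FakeComputeNode()
# Volume-backed instance: the root disk is a 20 GB Cinder volume, yet
# root_gb on the instance is still 20, so local usage is inflated.
update_usage(node, {'root_gb': 20})
print(node.local_gb_used)  # 20 GB counted as local disk, incorrectly
```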

This means we effectively add the Cinder volume's size as local storage on the hypervisor, which gives us wrong scheduling decisions, wrong "graphs" in Horizon, and a faulty view of what is actually stored on local disk.

As a quick example: on one of our compute nodes we have an instance with a 20 GB disk (a Cinder volume attached to /dev/vda). When dumping the instance object in the for loop in _update_usage_from_instances (in resource_tracker.py), we can see that root_gb is set to 20, so it is counted as local disk, which is wrong.
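One possible direction for a fix would be to skip root_gb when the root disk maps to a volume. The sketch below is a hypothetical helper, not Nova's actual object model; the block-device-mapping dict shape is an assumption for illustration:

```python
def root_is_volume_backed(block_device_mappings):
    """Return True if the boot disk (boot_index 0) maps to a volume.

    Hypothetical check: inspects simplified BDM dicts for a root disk
    whose destination is a volume rather than local storage.
    """
    for bdm in block_device_mappings:
        if bdm.get('boot_index') == 0:
            return bdm.get('destination_type') == 'volume'
    return False


# A root disk backed by a Cinder volume, as in our /dev/vda example.
bdms = [{'boot_index': 0,
         'source_type': 'volume',
         'destination_type': 'volume',
         'volume_id': 'vol-1234'}]
print(root_is_volume_backed(bdms))  # True -> don't count root_gb as local
```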

A note: we deploy instances from our own control panel using the Nova API. Perhaps there is some parameter that should not be passed along, to make sure root_gb is not set?
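For reference, a hedged sketch of the kind of request body our control panel might send to the Nova "create server" API when booting from a volume (the name, flavor, and volume IDs are placeholders). As far as I can tell, root_gb on the resulting instance comes from the flavor, not from any of these parameters, so there is no obvious field here that would prevent root_gb from being set:

```python
def build_boot_from_volume_request(name, flavor_ref, volume_id):
    """Build a create-server request body using block_device_mapping_v2.

    Illustrative only: the IDs are placeholders, and the exact fields
    accepted depend on the Nova API version in use.
    """
    return {
        'server': {
            'name': name,
            'flavorRef': flavor_ref,
            # No imageRef: the root disk comes from the volume below.
            'block_device_mapping_v2': [{
                'uuid': volume_id,
                'source_type': 'volume',
                'destination_type': 'volume',
                'boot_index': 0,
                'delete_on_termination': False,
            }],
        }
    }


body = build_boot_from_volume_request('web-1', 'flavor-placeholder', 'vol-1234')
print(body['server']['block_device_mapping_v2'][0]['boot_index'])  # 0
```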