Comment 3 for bug 1402502

melanie witt (melwitt) wrote:

Hi Jin,

I looked into this based on your comment and found some interesting things. From a technical, virt-level standpoint, you are right that resources on the hypervisor can be freed after an instance is suspended. It is likely hypervisor dependent, but in the case of libvirt I did observe the hypervisor's nova-compute.log show the vcpus in use decrease by 1 when I suspended a tiny instance.
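
For reference, here is a rough sketch (not part of nova) of how you can check the hypervisor-side view yourself, assuming the libvirt Python bindings and a local qemu:///system connection. It just totals the vCPUs of the domains libvirt still considers active; after a nova suspend (which, as far as I know, the libvirt driver does as a managed save) the domain is no longer active, so its vCPUs drop out of this total, matching the decrease I saw in nova-compute.log:

    import libvirt

    conn = libvirt.open('qemu:///system')

    active_vcpus = 0
    for dom in conn.listAllDomains(0):
        if dom.isActive():
            # dom.info() returns (state, maxMem, memory, nrVirtCpu, cpuTime)
            active_vcpus += dom.info()[3]

    print('vCPUs in use by active domains: %d' % active_vcpus)
    conn.close()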

However, I still could not schedule an extra instance on the hypervisor with cpu_allocation_ratio = 1.0, as you described. I found this is because the nova scheduler (which uses the resource tracker) computes resource usage from the nova database (how many instances are assigned to the hypervisor, what their flavors are, and so on) and does *not* use the values reported by the hypervisor. There are likely several reasons for this. One is that if you scheduled an extra instance while another was suspended and then resumed the suspended one, you could immediately end up in an overcommit situation you didn't intend, unless you migrated the extra instance away. Another might be race conditions between what the nova db knows and what the hypervisor currently sees.
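
To make the accounting concrete, here is a toy illustration (not nova's actual code) of why the extra instance is refused: the claimed vCPUs come from the flavors of all instances assigned to the host, regardless of vm_state, and are compared against the physical vCPUs times cpu_allocation_ratio:

    from collections import namedtuple

    Instance = namedtuple('Instance', ['flavor_vcpus', 'vm_state'])

    def vcpus_claimed(instances):
        # Suspended instances still count: the claim is based on the flavor,
        # not on what the hypervisor currently reports.
        return sum(inst.flavor_vcpus for inst in instances)

    def can_schedule(requested_vcpus, host_vcpus, instances,
                     cpu_allocation_ratio=1.0):
        limit = host_vcpus * cpu_allocation_ratio
        return vcpus_claimed(instances) + requested_vcpus <= limit

    # Host with 2 physical vCPUs, ratio 1.0, one active and one suspended instance:
    existing = [Instance(1, 'active'), Instance(1, 'suspended')]
    print(can_schedule(1, 2, existing))  # False: the suspended instance still claims its vCPU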

So it appears the bug here is in the user documentation, unfortunately. I will update this bug report as a Wishlist item for the desired change in behavior, and add the documentation project so the docs can be fixed.