Comment 5 for bug 916085

Serge Hallyn (serge-hallyn) wrote:

I'm testing this in quantal, and not quite seeing what you mean. When I switch the 'current' memory (using virt-manager) from 512m to 1024m, /proc/meminfo in the guest shows the updated values; likewise when I lower it back to 512m, after a few seconds the guest shows the new values. Of course, that requires the virtio_balloon kernel module to be loaded. To be clear, libvirt does not monitor the host's free memory or automatically lower the 'currently allocated' memory amount. That would need to be done by a separate monitoring tool.
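For reference, the same 'current' memory change can be made from the command line with virsh; the domain name "guest1" below is a placeholder:

```shell
# CLI equivalent of the virt-manager change; "guest1" is a placeholder domain name.
# virsh setmem takes the size in KiB by default.
virsh setmem guest1 1048576 --live   # raise current allocation to 1024M (up to the configured maximum)
virsh setmem guest1 524288 --live    # lower it back to 512M
# After a few seconds, /proc/meminfo inside the guest should show the new value,
# provided the virtio_balloon module is loaded there.
```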

Note that at times the kvm process will take more than 512M of RAM. 512M is what you are allocating to the guest, not a limit on the qemu-kvm process itself. Therefore, with 14 guests at 512M each, you should expect at least 7420M allocated at times by the kvm processes (assuming 530M per kvm process, which is the most I've seen in my testing today, with guests compiling a kernel image).
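The worst-case figure above is just the observed per-process footprint multiplied by the guest count:

```shell
# Worst-case host memory estimate: 14 guests x 530M observed per kvm process.
echo $((14 * 530))   # prints 7420 (MiB)
```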

I've left 4 VMs (512M allocated, 2048M maximum, on a 4G+4G laptop) running a kernel compile for a while. The guests do not automatically balloon up, so the most they've taken is 530M each. Did your guests automatically balloon up so that /proc/meminfo showed > 512M?
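One quick way to answer that question inside a guest (a generic sketch, nothing libvirt-specific) is to read MemTotal directly:

```shell
# Read the guest's visible memory in MiB; if the balloon deflated (i.e. the
# guest was given more memory), this will exceed the 512M allocation.
mem_mib=$(awk '/^MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
echo "guest MemTotal: ${mem_mib} MiB"
```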

Do you still see the problem you were reporting with precise or quantal?