Comment 13 for bug 1413540

Ryan Harper (raharper) wrote: Re: issues with KSM enabled for nested KVM VMs

I think you'll find that nested KVM plus KSM is a worst-case scenario with respect to memory swap-out. KSM actively scans host memory for identical pages it can merge into a single copy-on-write page, which means that when a guest (level 1) or nested guest (level 2) needs to write to that memory, it incurs a copy-on-write fault and a page-table walk (guest-virtual to host-physical), which is quite costly. This is made worse by memory overcommit, which forces the hypervisor to swap memory to disk to cover the working-set size.

KSM has no view into the page tables or swapping activity of the guests (L1 or L2), so over time it becomes increasingly likely that memory needed by an L1 or L2 guest has been swapped out. Swapping out the L1 memory used to run the L2 guest is likely to be the most painful case, since two levels of swap-in occur (host to L1, then L1 to L2). Swap in/out is I/O intensive, and blocking on I/O also makes soft lockups in the L1 or L2 guests more likely.
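To see how much merging KSM is actually doing on an affected host, the kernel exposes counters under /sys/kernel/mm/ksm (the standard sysfs interface on any kernel built with CONFIG_KSM; field meanings per Documentation/admin-guide/mm/ksm.rst). A minimal sketch to snapshot them:

```python
#!/usr/bin/env python3
"""Rough snapshot of KSM activity from sysfs (needs read access, usually root)."""
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")

def read_counter(name: str) -> int:
    """Read a single KSM sysfs counter as an integer."""
    return int((KSM / name).read_text().strip())

if __name__ == "__main__":
    run = read_counter("run")                  # 0=stopped, 1=running, 2=stopped+unmerged
    shared = read_counter("pages_shared")      # merged KSM pages in use
    sharing = read_counter("pages_sharing")    # page-table entries pointing at them
    unshared = read_counter("pages_unshared")  # unique pages repeatedly scanned
    scans = read_counter("full_scans")         # completed passes over candidate memory

    print(f"ksmd running: {run == 1}")
    print(f"pages_shared={shared} pages_sharing={sharing} "
          f"pages_unshared={unshared} full_scans={scans}")
    # A high pages_sharing/pages_shared ratio means many mappings share the same
    # page -- each guest write to one of those pages forces a copy-on-write break.
    if shared:
        print(f"sharing ratio: {sharing / shared:.1f}")
```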

I suggest looking at the OpenStack environment and disabling or turning down memory overcommit to reduce the memory pressure on the host. Given that KSM isn't optimized at all for nested KVM, it's certainly worth disabling KSM when running nested guests (certainly in the L1 guest, and possibly on the host as well), unless one wants to invest in tuning KSM and in closely monitoring host memory pressure, and L1 guest memory pressure wherever an L1 guest runs an L2 guest.
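If you do decide to disable KSM, the switch is the run file in the same sysfs directory; per the kernel's ksm documentation, writing 2 also unmerges the already-shared pages. A minimal sketch (requires root; note that on Ubuntu the qemu-kvm package typically enables KSM at boot via KSM_ENABLED= in /etc/default/qemu-kvm, so change that as well if you want the setting to persist across reboots):

```python
#!/usr/bin/env python3
"""Stop ksmd and optionally unmerge existing shared pages (run as root)."""
from pathlib import Path

RUN = Path("/sys/kernel/mm/ksm/run")

def disable_ksm(unmerge: bool = True) -> None:
    """Write the KSM control file: 0 stops ksmd but leaves merged pages in
    place; 2 stops ksmd and breaks all existing merges (costs CPU and memory
    up front, but removes the copy-on-write penalty for running guests)."""
    RUN.write_text("2\n" if unmerge else "0\n")

if __name__ == "__main__":
    disable_ksm(unmerge=True)
    print("KSM disabled; current run state:", RUN.read_text().strip())
```

Unmerging up front (run=2) trades a one-time burst of page copies for predictable guest write latency afterwards, which is usually the right call on a host carrying nested guests.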