Comment 4 for bug 1705132

Christian Ehrhardt (paelzer) wrote:

Prepared 20x1G hugepages:
$ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
20
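
For reference, a pool like this could be set up at runtime along these lines (an assumption, the exact setup isn't shown here; 1G pages may need to be reserved early, e.g. on the kernel command line, if memory is already fragmented):
$ echo 20 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages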

# Default uvtool-created guest, but with the memory raised to 19G and backed by huge pages:
  <memory unit='GiB'>19</memory>
  <currentMemory unit='GiB'>19</currentMemory>
  <memoryBacking>
          <hugepages>
                  <page size='1048576' unit='KiB' nodeset='0'/>
                  <page size='1048576' unit='KiB' nodeset='1'/>
          </hugepages>
  </memoryBacking>
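
Such a snippet can be added to an existing guest definition, for example via:
$ virsh edit test-hp-bug-1705132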

Starting that took about one second; it did not feel any slower than a smaller guest.
It really allocated all the memory from the hugepages:
$ virsh dominfo test-hp-bug-1705132 | grep memory
Max memory: 19922944 KiB
Used memory: 19922944 KiB
$ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages
1
$ sudo cat /proc/$(pidof qemu-system-x86_64)/numa_maps | grep huge
7fc600000000 default file=/dev/hugepages-1048576/libvirt/qemu/qemu_back_mem.pc.ram.JUsDuG\040(deleted) huge anon=19 dirty=19 N0=19 kernelpagesize_kB=1048576
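
Per-NUMA-node consumption can be cross-checked as well, e.g.:
$ cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/free_hugepages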

So far this is "not reproducible" for me, but I did check the startup time:
1G  0.022s
10G 1.398s
19G 2.154s
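
For comparison, similar numbers can be gathered with something along these lines (an illustration only, not necessarily how the values above were taken):
$ time virsh start test-hp-bug-1705132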
I can see how that
a) scales up if you go for, let's say, a 250G guest (rough arithmetic below)
b) might get worse on slow cross-node NUMA setups (I got about 10G/s; if one only gets 2G/s, that is 5x the duration)
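
(Rough arithmetic based on the numbers above: 19G at ~10G/s is about 2s, matching the measured 2.154s; at 2G/s the same guest would take roughly 9.5s, and a 250G guest roughly 25s vs. 125s.)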

But in my case the "-m <size>" argument always matched my memory spec.
Remember that I pointed out in comment #3 that you had 124G in the -m but 16G in the XML you listed.