Comment 1 for bug 1373159

Michael Turek (mjturek) wrote:

So after a bit more investigating, I have a better understanding of the consequences of specifying cell memory in MiB rather than the expected KiB.

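To make the unit issue concrete, here is a minimal sketch (the names are mine, not Nova's actual code) of the scaling involved: the per-cell memory coming from the flavor is tracked in MiB, while libvirt interprets the <numa> cell memory value as KiB, so each value needs to be multiplied by 1024 before it lands in the domain XML.

KIB_PER_MIB = 1024

def cell_memory_kib(cell_memory_mib):
    """Convert a NUMA cell's memory from MiB to the KiB value libvirt expects."""
    return cell_memory_mib * KIB_PER_MIB
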
When using qemu-2.1.0:
The feature simply does not work. Machines with NUMA specs that should boot instead fail at the libvirt/qemu level and go to the error state. This happens regardless of whether cell memory is specified explicitly or left to the default of distributing memory equally across the cells.

When using qemu-2.0.0 (or lower):
Machines boot, but with the wrong NUMA topology. For example, with either of the following extra_specs:
{"hw:numa_policy": "strict", "hw:numa_mem.1": "2048", "hw:numa_mem.0": "6144", "hw:numa_nodes": "2", "hw:numa_cpus.0": "0,1,2", "hw:numa_cpus.1": "3"}
{{"hw:numa_policy": "strict", "hw:numa_nodes": "2"}

The following topology is found on the guest:
node 0 cpus: 0 1 2 3
node 0 size: 7986 MB
node 0 free: 7568 MB
node distances:
node 0
  0: 10
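
As a back-of-the-envelope check (my own illustration, not output from the fix), the first set of extra_specs above should translate into the following per-cell KiB values on the libvirt side:

expected_kib = {0: 6144 * 1024, 1: 2048 * 1024}
print(expected_kib)  # {0: 6291456, 1: 2097152}
# If the MiB values are passed through unscaled, libvirt instead sees cells of
# only 6144 KiB and 2048 KiB (6 MiB and 2 MiB), which presumably is why the
# requested topology never materializes on the guest.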

The quick fix that Tiago and I tried produces the following topology, which is the intended behavior:

When extra_specs are {"hw:numa_policy": "strict", "hw:numa_nodes": "2"}

node 0 cpus: 0 1
node 0 size: 3955 MB
node 0 free: 3728 MB
node 1 cpus: 2 3
node 1 size: 4031 MB
node 1 free: 3846 MB
node distances:
node 0 1
  0: 10 20
  1: 20 10

When extra_specs are {"hw:numa_policy": "strict", "hw:numa_mem.1": "2048", "hw:numa_mem.0": "6144", "hw:numa_nodes": "2", "hw:numa_cpus.0": "0,1,2", "hw:numa_cpus.1": "3"}

available: 2 nodes (0-1)
node 0 cpus: 0 1 2
node 0 size: 5971 MB
node 0 free: 5587 MB
node 1 cpus: 3
node 1 size: 2015 MB
node 1 free: 1983 MB
node distances:
node 0 1
  0: 10 20
  1: 20 10

So in short, the feature is not working as intended, and once qemu-2.1.0 becomes more common it will be outright broken. I'll be proposing a fix for this later today.