Comment 4 for bug 1461620

Rafael David Tinoco (rafaeldtinoco) wrote:

To better understand how easily this bug can be triggered, I created the following test case:

I've been using a KVM guest that emulates a NUMA topology with 32 nodes (one per vCPU):

root@numa:~# numactl -H
available: 32 nodes (0-31)
node 0 cpus: 0
node 0 size: 237 MB
node 0 free: 82 MB
node 1 cpus: 1
node 1 size: 251 MB
node 1 free: 15 MB
node 2 cpus: 2
node 2 size: 251 MB
node 2 free: 52 MB
node 3 cpus: 3
node 3 size: 251 MB
node 3 free: 240 MB
node 4 cpus: 4
node 4 size: 251 MB
node 4 free: 15 MB
node 5 cpus: 5
node 5 size: 251 MB
node 5 free: 15 MB
node 6 cpus: 6
node 6 size: 251 MB
node 6 free: 17 MB
node 7 cpus: 7
node 7 size: 251 MB
node 7 free: 15 MB
node 8 cpus: 8
node 8 size: 251 MB
node 8 free: 16 MB
node 9 cpus: 9
node 9 size: 251 MB
node 9 free: 16 MB
node 10 cpus: 10
node 10 size: 251 MB
node 10 free: 15 MB
node 11 cpus: 11
node 11 size: 187 MB
node 11 free: 13 MB
node 12 cpus: 12
node 12 size: 251 MB
node 12 free: 15 MB
node 13 cpus: 13
node 13 size: 251 MB
node 13 free: 17 MB
node 14 cpus: 14
node 14 size: 251 MB
node 14 free: 15 MB
node 15 cpus: 15
node 15 size: 251 MB
node 15 free: 16 MB
node 16 cpus: 16
node 16 size: 251 MB
node 16 free: 17 MB
node 17 cpus: 17
node 17 size: 251 MB
node 17 free: 17 MB
node 18 cpus: 18
node 18 size: 251 MB
node 18 free: 16 MB
node 19 cpus: 19
node 19 size: 251 MB
node 19 free: 15 MB
node 20 cpus: 20
node 20 size: 251 MB
node 20 free: 16 MB
node 21 cpus: 21
node 21 size: 251 MB
node 21 free: 17 MB
node 22 cpus: 22
node 22 size: 251 MB
node 22 free: 51 MB
node 23 cpus: 23
node 23 size: 251 MB
node 23 free: 37 MB
node 24 cpus: 24
node 24 size: 251 MB
node 24 free: 120 MB
node 25 cpus: 25
node 25 size: 251 MB
node 25 free: 115 MB
node 26 cpus: 26
node 26 size: 251 MB
node 26 free: 41 MB
node 27 cpus: 27
node 27 size: 251 MB
node 27 free: 15 MB
node 28 cpus: 28
node 28 size: 251 MB
node 28 free: 15 MB
node 29 cpus: 29
node 29 size: 251 MB
node 29 free: 17 MB
node 30 cpus: 30
node 30 size: 251 MB
node 30 free: 164 MB
node 31 cpus: 31
node 31 size: 251 MB
node 31 free: 228 MB

I then stress the environment (as you can see from the free memory on every NUMA node) with a specific tool that allocates a certain amount of memory and "touches" every 32 bytes of it, dirtying the whole region at the end and then starting over. Alongside that, I create enough kernel tasks, running concurrently with these memory allocators, to compete for CPU time, forcing the memory threads to migrate between CPUs (and therefore between NUMA domains, since every CPU sits in a different NUMA domain).