NUMA nodes created with memory-backend-ram show sizes different from those requested

Bug #1912170 reported by Rafał Rudnicki
This bug affects 1 person
Affects: QEMU
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

I created a system with 7 NUMA nodes where nodes 0-3 should each be exactly 268435456 bytes and nodes 4-6 exactly 1610612736 bytes, but when I run "numactl -H" I get different (smaller) sizes.
It is essential for me to be able to emulate a system with nodes of an exact size. Is this possible?
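
As a sanity check (added for clarity, not part of the original report), the requested backend sizes do add up to the -m total, so the shortfall is not a mismatch between the backends and -m:

$ echo $(( (4 * 268435456 + 3 * 1610612736) / 1048576 ))M
5632M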

QEMU version:

QEMU emulator version 5.1.0
Copyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers

QEMU command:

qemu-system-x86_64 -hda qemu-image/ubuntu-1804.img -enable-kvm -cpu Cascadelake-Server \
  -vnc :5 -netdev user,id=net0,hostfwd=tcp::10022-:22 -device virtio-net,netdev=net0 \
  -boot c -m 5632.0M \
  -object memory-backend-ram,id=ram-node0,size=268435456 -numa node,nodeid=0,cpus=0-3,memdev=ram-node0 \
  -object memory-backend-ram,id=ram-node1,size=268435456 -numa node,nodeid=1,cpus=4-7,memdev=ram-node1 \
  -object memory-backend-ram,id=ram-node2,size=268435456 -numa node,nodeid=2,cpus=8-11,memdev=ram-node2 \
  -object memory-backend-ram,id=ram-node3,size=268435456 -numa node,nodeid=3,cpus=12-15,memdev=ram-node3 \
  -object memory-backend-ram,id=ram-node4,size=1610612736 -numa node,nodeid=4,memdev=ram-node4 \
  -object memory-backend-ram,id=ram-node5,size=1610612736 -numa node,nodeid=5,memdev=ram-node5 \
  -object memory-backend-ram,id=ram-node6,size=1610612736 -numa node,nodeid=6,memdev=ram-node6 \
  -numa dist,src=0,dst=0,val=10 -numa dist,src=0,dst=1,val=21 -numa dist,src=0,dst=2,val=31 -numa dist,src=0,dst=3,val=21 -numa dist,src=0,dst=4,val=17 -numa dist,src=0,dst=5,val=38 -numa dist,src=0,dst=6,val=28 \
  -numa dist,src=1,dst=0,val=21 -numa dist,src=1,dst=1,val=10 -numa dist,src=1,dst=2,val=21 -numa dist,src=1,dst=3,val=31 -numa dist,src=1,dst=4,val=28 -numa dist,src=1,dst=5,val=17 -numa dist,src=1,dst=6,val=38 \
  -numa dist,src=2,dst=0,val=31 -numa dist,src=2,dst=1,val=21 -numa dist,src=2,dst=2,val=10 -numa dist,src=2,dst=3,val=21 -numa dist,src=2,dst=4,val=28 -numa dist,src=2,dst=5,val=28 -numa dist,src=2,dst=6,val=28 \
  -numa dist,src=3,dst=0,val=21 -numa dist,src=3,dst=1,val=31 -numa dist,src=3,dst=2,val=21 -numa dist,src=3,dst=3,val=10 -numa dist,src=3,dst=4,val=28 -numa dist,src=3,dst=5,val=28 -numa dist,src=3,dst=6,val=17 \
  -numa dist,src=4,dst=0,val=17 -numa dist,src=4,dst=1,val=28 -numa dist,src=4,dst=2,val=28 -numa dist,src=4,dst=3,val=28 -numa dist,src=4,dst=4,val=10 -numa dist,src=4,dst=5,val=28 -numa dist,src=4,dst=6,val=28 \
  -numa dist,src=5,dst=0,val=38 -numa dist,src=5,dst=1,val=17 -numa dist,src=5,dst=2,val=28 -numa dist,src=5,dst=3,val=28 -numa dist,src=5,dst=4,val=28 -numa dist,src=5,dst=5,val=10 -numa dist,src=5,dst=6,val=28 \
  -numa dist,src=6,dst=0,val=38 -numa dist,src=6,dst=1,val=28 -numa dist,src=6,dst=2,val=28 -numa dist,src=6,dst=3,val=17 -numa dist,src=6,dst=4,val=28 -numa dist,src=6,dst=5,val=28 -numa dist,src=6,dst=6,val=10 \
  -smp 16,sockets=4,dies=1,cores=4,threads=1 \
  -fsdev local,security_model=passthrough,id=fsdev0,path=/home/mysuser/share \
  -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=share_host \
  -daemonize
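
For reference (a cross-check added for clarity, not part of the original report), the per-node totals that numactl reports can also be read with kB precision from sysfs inside the guest:

$ grep MemTotal /sys/devices/system/node/node*/meminfo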

output from numactl -H:

$ numactl -H
available: 7 nodes (0-6)
node 0 cpus: 0 1 2 3
node 0 size: 250 MB
node 0 free: 191 MB
node 1 cpus: 4 5 6 7
node 1 size: 251 MB
node 1 free: 199 MB
node 2 cpus: 8 9 10 11
node 2 size: 251 MB
node 2 free: 218 MB
node 3 cpus: 12 13 14 15
node 3 size: 251 MB
node 3 free: 118 MB
node 4 cpus:
node 4 size: 1511 MB
node 4 free: 1507 MB
node 5 cpus:
node 5 size: 1447 MB
node 5 free: 1443 MB
node 6 cpus:
node 6 size: 1489 MB
node 6 free: 1484 MB
node distances:
node   0   1   2   3   4   5   6
  0:  10  21  31  21  17  38  28
  1:  21  10  21  31  28  17  38
  2:  31  21  10  21  28  28  28
  3:  21  31  21  10  28  28  17
  4:  17  28  28  28  10  28  28
  5:  38  17  28  28  28  10  28
  6:  38  28  28  17  28  28  10
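
For comparison (added for clarity), the requested backend sizes work out to 256 MiB for nodes 0-3 and 1536 MiB for nodes 4-6, so every node is reported smaller than configured:

$ echo $(( 268435456 / 1048576 )) $(( 1610612736 / 1048576 ))
256 1536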

Revision history for this message
Thomas Huth (th-huth) wrote :

The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

    https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire automatically
after 60 days). Please mention the URL of this bug ticket on Launchpad in
the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get one,
but would still like to keep this ticket open, then please switch the state
back to "New" or "Confirmed" within the next 60 days (otherwise it will get
closed as "Expired"). We will then eventually migrate the ticket automatically
to the new system (but you won't be the reporter of the bug in the new system
and thus you won't get notified of changes anymore).

Thank you and sorry for the inconvenience.

Changed in qemu:
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for QEMU because there has been no activity for 60 days.]

Changed in qemu:
status: Incomplete → Expired