Comment 17 for bug 1827258

Bin Yang (byangintel) wrote :

from compute-1_20190507.124154/var/log/kern.log:
    2019-05-06T16:24:12.266 localhost kernel: debug [ 0.000000] On node 0 totalpages: 4174118
    ...
    2019-05-06T17:28:56.749 compute-1 kernel: info [ 1515.471986] Node 0 hugepages_total=1 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
    2019-05-06T17:28:56.749 compute-1 kernel: info [ 1515.471987] Node 0 hugepages_total=6807 hugepages_free=6807 hugepages_surp=0 hugepages_size=2048kB
    ...

from hieradata/192.168.204.77.yaml:
    platform::compute::hugepage::params::vm_2M_pages: '"7024,7172"'
    ...
    platform::compute::params::worker_base_reserved: ("node0:8000MB:1" "node1:2000MB:1")

from puppet.log:
    ...
    Exec[Allocate 7024 /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages]
    ...
    Exec[Allocate 7172 /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages]
    ...

The total memory on node 0 is 16GB.

From the kernel log, the 2M hugepage count is 6807, which is smaller than the configured 7024. It looks like the system does not have enough memory for 7024 2M pages.
The total expected hugepage size is: 7024*2M + 1G = 14.7GB
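
As a quick sanity check, the numbers above can be reproduced as below (a minimal sketch; the page counts and the 16GB total are taken from the logs in this comment):

    #!/usr/bin/env python
    # Quick check of the hugepage sizes reported above.
    MB = 1024 * 1024
    GB = 1024 * MB

    requested_2m_pages = 7024   # hieradata vm_2M_pages, node 0
    allocated_2m_pages = 6807   # kern.log hugepages_total, node 0

    requested = requested_2m_pages * 2 * MB + 1 * GB   # plus the one 1G page
    allocated = allocated_2m_pages * 2 * MB + 1 * GB

    print("requested: %.1f GB" % (requested / float(GB)))   # ~14.7 GB
    print("allocated: %.1f GB" % (allocated / float(GB)))   # ~14.3 GB

So the configured allocation alone would consume almost all of the 16GB on node 0, before any reservation is counted.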

This is not reasonable, since we have several reserved memory resources, as below (a rough estimate follows the list):
    1. 8GB reserved by worker_reserved.conf:
        WORKER_BASE_RESERVED=("node0:8000MB:1" "node1:2000MB:1")
    2. 10% reserved by the following code:
        sysinv host.py: vm_hugepages_nr_2M = int(m.vm_hugepages_possible_2M * 0.9)
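
For illustration, applying both reservations to the 16GB node gives a rough upper bound (a simplified sketch; it assumes node_total is close to the 16GB physical total and charges the 1GB vswitch memory, described below, to node 0):

    #!/usr/bin/env python
    # Rough upper bound for node 0's 2M hugepages given the stated reservations.
    node_total_mb  = 16 * 1024   # ~16GB physical memory on node 0
    base_mem_mb    = 8000        # WORKER_BASE_RESERVED, node 0
    vswitch_mem_mb = 1024        # 1GB COMPUTE_VSWITCH_MEMORY

    possible_2m_pages = (node_total_mb - base_mem_mb - vswitch_mem_mb) // 2
    expected_2m_pages = int(possible_2m_pages * 0.9)

    print(possible_2m_pages)   # 3680
    print(expected_2m_pages)   # 3312

Both numbers are far below the 7024 pages recorded in hieradata.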

vm_hugepages_possible_2M is calculated by the _inode_get_memory_hugepages() function with the following logic (a sketch follows below):

    node_total_kb = total_hp_mb * SIZE_KB + free_kb + pss_mb * SIZE_KB
        total_hp_mb is 0, since no 2M hugepages are reserved on the kernel command line
        free_kb is from /sys/devices/system/node/node0/meminfo
        pss_mb is collected from /proc/*/smaps

    vm_hugepages_possible_2M = node_total_kb - base_mem_mb - vswitch_mem_kb (with base_mem_mb converted to KB)
        base_mem_mb is 8GB from WORKER_BASE_RESERVED in worker_reserved.conf
        vswitch_mem_kb is 1GB from COMPUTE_VSWITCH_MEMORY in worker_reserved.conf
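
For reference, a minimal sketch of that logic (an illustration of the description above, not the actual sysinv code; the file parsing and unit handling are my assumptions):

    #!/usr/bin/env python
    # Illustrative reconstruction of the node_total_kb and
    # vm_hugepages_possible_2M calculation described above.
    import glob
    import re

    SIZE_KB = 1024  # KB per MB

    def node_free_kb(node=0):
        # free_kb, from /sys/devices/system/node/nodeN/meminfo
        with open('/sys/devices/system/node/node%d/meminfo' % node) as f:
            for line in f:
                m = re.search(r'MemFree:\s+(\d+)\s+kB', line)
                if m:
                    return int(m.group(1))
        return 0

    def total_pss_mb():
        # pss_mb, summed over /proc/*/smaps (same as the awk one-liner below)
        pss_kb = 0
        for path in glob.glob('/proc/[0-9]*/smaps'):
            try:
                with open(path) as f:
                    for line in f:
                        if line.startswith('Pss:'):
                            pss_kb += int(line.split()[1])
            except IOError:
                pass  # process exited while being read
        return pss_kb // SIZE_KB

    total_hp_mb = 0                   # no 2M pages reserved on the kernel cmdline
    free_kb = node_free_kb(0)
    pss_mb = total_pss_mb()

    node_total_kb = total_hp_mb * SIZE_KB + free_kb + pss_mb * SIZE_KB

    base_mem_mb = 8000                # WORKER_BASE_RESERVED, node 0
    vswitch_mem_kb = 1024 * SIZE_KB   # 1GB COMPUTE_VSWITCH_MEMORY

    vm_hugepages_possible_2M = (node_total_kb - base_mem_mb * SIZE_KB
                                - vswitch_mem_kb) // 2048
    vm_hugepages_nr_2M = int(vm_hugepages_possible_2M * 0.9)
    print(node_total_kb, vm_hugepages_possible_2M, vm_hugepages_nr_2M)

If free_kb or pss_mb is inflated when the profile is generated (for example by the overcommit mode asked about below), node_total_kb, and therefore the 2M page count, will be overestimated.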

So far, vm_hugepages_possible_2M has always been correct in our Shanghai bare-metal tests.

Could the reporters help provide more info when this bug is triggered?
1. run system host-memory-list <compute node>
2. run cat /proc/sys/vm/overcommit_* #the mode will impact the free_kb calculation
3. run cat /proc/*/smaps 2>/dev/null | awk '/^Pss:/ {a += $2;} END {printf "%d\n", a/1024.0;}' on compute nodes
4. run cat /sys/devices/system/node/node*/meminfo on compute nodes

thanks,
Bin