NUMA topology isn't discovered on some HW labs

Bug #1552342 reported by Ksenia Svechnikova on 2016-03-02
This bug affects 1 person
Affects: Fuel for OpenStack | Importance: High | Assigned to: Ivan Ponomarev

Bug Description

9.0 ISO 54

Steps:
1. Prepare an env that supports NUMA CPUs
2. Check the NUMA topology from the compute node:
root@bootstrap:~# lstopo --no-caches http://paste.openstack.org/show/489009/
3. Check the NUMA topology in the CLI:
$ fuel2 node show 2
http://paste.openstack.org/show/489010/
4. Check the nailgun log: http://paste.openstack.org/show/489011/

As we can see there are 4 NUMA nodes on node-2, but they are not discovered in Fuel.

The reason is a new hierarchy level (Group) in the lstopo output for this node:

root@bootstrap:~# lstopo --no-caches
Machine (126GB)
  Group0 L#0 (63GB)
    NUMANode L#0 (P#0 31GB)
      Socket L#0
        Core L#0
          PU L#0 (P#0)
          PU L#1 (P#20)
        Core L#1
          PU L#2 (P#1)
          PU L#3 (P#21)
        Core L#2
          PU L#4 (P#2)
          PU L#5 (P#22)
        Core L#3
          PU L#6 (P#3)
          PU L#7 (P#23)
        Core L#4
          PU L#8 (P#4)
          PU L#9 (P#24)

And this is how it looks on another lab that is parsed successfully:

root@bootstrap:~# lstopo --no-caches
Machine (252GB)
  NUMANode L#0 (P#0 126GB)
    Socket L#0
      Core L#0
        PU L#0 (P#0)
        PU L#1 (P#20)
////
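The extra Group level matters because the agent locates NUMANode objects in hwloc's XML export with an XPath-style query. A minimal sketch of the failure mode (in Python, against a simplified, hypothetical hwloc-like XML layout — the element and attribute names here are illustrative, not the agent's actual code):

```python
import xml.etree.ElementTree as ET

# Simplified stand-ins for the two labs above (hypothetical structure).
flat = """<object type="Machine">
  <object type="NUMANode" os_index="0"/>
</object>"""

grouped = """<object type="Machine">
  <object type="Group">
    <object type="NUMANode" os_index="0"/>
    <object type="NUMANode" os_index="1"/>
  </object>
</object>"""

def numa_nodes_fixed_depth(xml):
    # Fragile: only matches NUMANode as a direct child of Machine.
    root = ET.fromstring(xml)
    return root.findall("./object[@type='NUMANode']")

def numa_nodes_any_depth(xml):
    # Robust: matches NUMANode at any depth, surviving the extra Group level.
    root = ET.fromstring(xml)
    return root.findall(".//object[@type='NUMANode']")

print(len(numa_nodes_fixed_depth(grouped)))  # 0 - nodes missed
print(len(numa_nodes_any_depth(grouped)))    # 2 - nodes found
```

Switching to a descendant-axis query, as the fix below does for the agent's XPath, makes discovery independent of whatever intermediate Group levels hwloc inserts.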

Ksenia Svechnikova (kdemina) wrote :
Changed in fuel:
status: New → Confirmed
Dmitry Pyzhov (dpyzhov) on 2016-03-03
tags: added: team-telco

Fix proposed to branch: master
Review: https://review.openstack.org/288535

Changed in fuel:
assignee: Fuel Telco (fuel-telco-team) → Ivan Ponomarev (ivanzipfer)
status: Confirmed → In Progress

Reviewed: https://review.openstack.org/288535
Committed: https://git.openstack.org/cgit/openstack/fuel-nailgun-agent/commit/?id=3d46f10a74b1852e57761e25be986a16ed1fed9e
Submitter: Jenkins
Branch: master

commit 3d46f10a74b1852e57761e25be986a16ed1fed9e
Author: Ivan Ponomarev <email address hidden>
Date: Fri Mar 4 18:34:16 2016 +0300

    Getting numanode from all possible structure

      - Setup xpath to getting NUMANode

    Closes-Bug: #1552342

    Change-Id: I01361093edd816d210774b1ea9f98c32d4250c4f

Changed in fuel:
status: In Progress → Fix Committed
Ksenia Svechnikova (kdemina) wrote :

Verified on a HW lab with NUMA and cluster-on-die:

| numa_nodes | [{u'id': 0, u'pcidevs': [u'0000:03:00.0', u'0000:03:00.1', u'0000:00:11.4', u'0000:08:00.0', u'0000:09:00.0', u'0000:09:00.1', u'0000:00:1f.2'], u'cpus': [0, 20, 1, 21, 2, 22, 3, 23, 4, 24], u'memory': 33630732288}, {u'id': 1, u'pcidevs': [], u'cpus': [5, 25, 6, 26, 7, 27, 8, 28, 9, 29], u'memory': 33821622272}, {u'id': 2, u'pcidevs': [u'0000:81:00.0', u'0000:81:00.1'], u'cpus': [10, 30, 11, 31, 12, 32, 13, 33, 14, 34], u'memory': 33821626368}, {u'id': 3, u'pcidevs': [], u'cpus': [15, 35, 16, 36, 17, 37, 18, 38, 19, 39], u'memory': 33821171712}] |
| supported_hugepages | [2048, 1048576] |
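The verified data above can be cross-checked quickly. A small sketch (Python; the `numa_nodes` values are copied from the dump above, the helper code is ours, not Fuel's) confirming the CPU lists partition all 40 hardware threads and the per-node memory sums back to the machine total:

```python
# Values copied from the verification output above (pcidevs omitted).
numa_nodes = [
    {"id": 0, "cpus": [0, 20, 1, 21, 2, 22, 3, 23, 4, 24], "memory": 33630732288},
    {"id": 1, "cpus": [5, 25, 6, 26, 7, 27, 8, 28, 9, 29], "memory": 33821622272},
    {"id": 2, "cpus": [10, 30, 11, 31, 12, 32, 13, 33, 14, 34], "memory": 33821626368},
    {"id": 3, "cpus": [15, 35, 16, 36, 17, 37, 18, 38, 19, 39], "memory": 33821171712},
]

all_cpus = sorted(c for n in numa_nodes for c in n["cpus"])
assert all_cpus == list(range(40))  # every hardware thread appears exactly once

total_gib = sum(n["memory"] for n in numa_nodes) / 2**30
print(f"{len(numa_nodes)} NUMA nodes, {total_gib:.0f} GiB total")  # 4 nodes, ~126 GiB
```

The ~126 GiB total matches the `Machine (126GB)` line from the lstopo output of the failing lab, so all four nodes are now accounted for.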

Changed in fuel:
status: Fix Committed → Fix Released