vSwitch socket memory with numa node issue

Bug #1858390 reported by Chiinting-Huang
Affects: StarlingX
Status: Fix Released
Importance: High
Assigned to: Kristal Dale

Bug Description

Brief Description
-----------------
I have followed the StarlingX 3.0 AIO install guide.
When I try to unlock the controller, I get this message.

[sysadmin@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
vSwitch socket memory must be allocated for numa node (0).

Steps to Reproduce
------------------
1. Configure the number of 1G huge pages required on both NUMA nodes:
system host-memory-modify controller-0 0 -1G 100
system host-memory-modify controller-0 1 -1G 100

2. Assign OpenStack host labels to controller-0:
system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled
system host-label-assign controller-0 sriov=enabled

3. Deploy OVS-DPDK:
system modify --vswitch_type ovs-dpdk
system host-cpu-modify -f vswitch -p0 1 controller-0

Then unlock the host.
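
Before unlocking, the per-NUMA memory state can be checked; a minimal sketch, using the same host-memory-list command whose output is quoted later in this report:
   system host-memory-list controller-0
In this report, the unlock was rejected while the vswitch huge page total (vs_hp_total) was still 0 for a NUMA node.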

Expected Behavior
-----------------
The host should unlock and reach the active state without issues.

System Configuration
--------------------
Bare Metal All-in-one simplex

Revision history for this message
Ghada Khalil (gkhalil) wrote :

Marking as stx.3.0 since the unlock fails.

This may be a procedural issue. Assigning to networking PL to confirm the required steps and recommend the required changes to the install guide.

tags: added: stx.3.0 stx.config stx.networking
tags: removed: stx.3.0
tags: added: stx.3.0 stx.docs
tags: removed: stx.config
Changed in starlingx:
importance: Undecided → High
status: New → Triaged
assignee: nobody → Le, Huifeng (hle2)
Revision history for this message
Ghada Khalil (gkhalil) wrote :

I subscribed Kristal Dale to this LP. She can help with updating the install guide (if needed).

Le, Huifeng (hle2)
Changed in starlingx:
assignee: Le, Huifeng (hle2) → ChenjieXu (midone)
Revision history for this message
ChenjieXu (midone) wrote :

Hi Huang,

The following command can be used to configure vSwitch memory per NUMA node:
   system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>
   i.e. system host-memory-modify -f vswitch -1G 1 compute-0 0
Could you please try the above commands and let us know the result?

The huge pages configured by your commands will be used by the VMs that StarlingX creates:
   system host-memory-modify controller-0 0 -1G 100
   system host-memory-modify controller-0 1 -1G 100
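
For the all-in-one simplex host in this report, the equivalent vSwitch commands would presumably be (a sketch based on the template above, with one 1G page per NUMA node, as later confirmed in the comments below):
   system host-memory-modify -f vswitch -1G 1 controller-0 0
   system host-memory-modify -f vswitch -1G 1 controller-0 1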

Revision history for this message
ChenjieXu (midone) wrote :

Hi Kristal,

Once the commands are verified by Huang, could you please update the documentation? The following section can be added to the document:
   If vswitch_type is set to OVS-DPDK, configure vSwitch memory per NUMA node:
   system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>
   i.e. system host-memory-modify -f vswitch -1G 1 compute-0 0

Revision history for this message
Chiinting-Huang (tad3420077) wrote :

Hi Chen,

Here is my result:

[sysadmin@controller-0 ~(keystone_admin)]$ system host-memory-modify controller-0 0 -1G 100
+-------------------------------------+--------------------------------------+
| Property | Value |
+-------------------------------------+--------------------------------------+
| Memory: Usable Total (MiB) | 0 |
| Platform (MiB) | 7000 |
| Available (MiB) | 0 |
| Huge Pages Configured | True |
| vSwitch Huge Pages: Size (MiB) | 1024 |
| Total | 0 |
| Available | 0 |
| Required | None |
| Application Pages (4K): Total | None |
| Application Huge Pages (2M): Total | 0 |
| Available | 0 |
| Application Huge Pages (1G): Total | 0 |
| Total Pending | 100 |
| Available | 0 |
| uuid | ba4eb100-03aa-4c8d-a54d-5871b0bf7b58 |
| ihost_uuid | 62ee2379-1d54-4f44-b0b0-75ad83fa145b |
| inode_uuid | 86e377c6-f421-4e0a-8c6a-59a2ec3085d4 |
| created_at | 2020-01-07T03:50:38.070310+00:00 |
| updated_at | 2020-01-07T05:33:07.200999+00:00 |
+-------------------------------------+--------------------------------------+
[sysadmin@controller-0 ~(keystone_admin)]$ system host-memory-modify controller-0 1 -1G 100
+-------------------------------------+--------------------------------------+
| Property | Value |
+-------------------------------------+--------------------------------------+
| Memory: Usable Total (MiB) | 0 |
| Platform (MiB) | 1000 |
| Available (MiB) | 0 |
| Huge Pages Configured | True |
| vSwitch Huge Pages: Size (MiB) | 1024 |
| Total | 0 |
| Available | 0 |
| Required | None |
| Application Pages (4K): Total | None |
| Application Huge Pages (2M): Total | 0 |
| Available | 0 |
| Application Huge Pages (1G): Total | 0 ...

Revision history for this message
ChenjieXu (midone) wrote :

Hi Huang,

Could you please use the following command?
   system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>
   i.e. system host-memory-modify -f vswitch -1G 1 compute-0 0

Revision history for this message
Chiinting-Huang (tad3420077) wrote :

Hi Chen,

system host-memory-modify -f vswitch -1G 1 controller-0 0
+-------------------------------------+--------------------------------------+
| Property | Value |
+-------------------------------------+--------------------------------------+
| Memory: Usable Total (MiB) | 0 |
| Platform (MiB) | 7000 |
| Available (MiB) | 0 |
| Huge Pages Configured | True |
| vSwitch Huge Pages: Size (MiB) | 1024 |
| Total | 0 |
| Available | 0 |
| Required | 1 |
| Application Pages (4K): Total | None |
| Application Huge Pages (2M): Total | 0 |
| Available | 0 |
| Application Huge Pages (1G): Total | 0 |
| Total Pending | 100 |
| Available | 0 |
| uuid | ba4eb100-03aa-4c8d-a54d-5871b0bf7b58 |
| ihost_uuid | 62ee2379-1d54-4f44-b0b0-75ad83fa145b |
| inode_uuid | 86e377c6-f421-4e0a-8c6a-59a2ec3085d4 |
| created_at | 2020-01-07T03:50:38.070310+00:00 |
| updated_at | 2020-01-07T06:02:09.903363+00:00 |
+-------------------------------------+--------------------------------------+

My memory list has changed:
[sysadmin@controller-0 ~(keystone_admin)]$ system host-memory-list controller-0
+-----------+---------+------------+---------+----------------+--------+--------+--------+-------+--------+--------+--------+----------+--------+--------+----------+-----------+
| processor | mem_tot | mem_platfo | mem_ava | hugepages(hp)_ | vs_hp_ | vs_hp_ | vs_hp_ | vs_hp | app_to | app_hp | app_hp | app_hp_p | app_hp | app_hp | app_hp_p | app_hp_us |
| | al(MiB) | rm(MiB) | il(MiB) | configured | size(M | total | avail | _reqd | tal_4K | _total | _avail | ending_2 | _total | _avail | ending_1 | e_1G |
| | | | | | iB) | | | | | _2M | _2M | M | _1G | _1G | G | |
+-----------+---------+------------+---------+----------------+--------+--------+--------+-------+--------+--------+--------+----------+--------+--------+----------+-----------+
| 0 | 0 | 7000 | 0 | True | 1024 | 1 | 0 | 1 | None | 0 | 0 | None | 0 | 0 | 100 | True |
| 1 | 0 | 1000 | 0 | True | 1024 | 0 | 0 | None | None | 0 | 0 | None | 0 ...

Revision history for this message
ChenjieXu (midone) wrote :

Hi Huang,

Could you please try the following command to allocate a 1G huge page for NUMA node 1:
system host-memory-modify -f vswitch -1G 1 controller-0 1

Revision history for this message
Chiinting-Huang (tad3420077) wrote :

Hi Chen,

system host-memory-modify -f vswitch -1G 1 controller-0 1
+-------------------------------------+--------------------------------------+
| Property | Value |
+-------------------------------------+--------------------------------------+
| Memory: Usable Total (MiB) | 0 |
| Platform (MiB) | 1000 |
| Available (MiB) | 0 |
| Huge Pages Configured | True |
| vSwitch Huge Pages: Size (MiB) | 1024 |
| Total | 0 |
| Available | 0 |
| Required | 1 |
| Application Pages (4K): Total | None |
| Application Huge Pages (2M): Total | 0 |
| Available | 0 |
| Application Huge Pages (1G): Total | 0 |
| Total Pending | 100 |
| Available | 0 |
| uuid | a6a48f19-aa6a-4bbc-86a5-8ebfa534a2fc |
| ihost_uuid | 62ee2379-1d54-4f44-b0b0-75ad83fa145b |
| inode_uuid | 44373052-19dc-40a6-9931-bc8d046c9a97 |
| created_at | 2020-01-07T03:50:38.114011+00:00 |
| updated_at | 2020-01-07T06:26:11.064348+00:00 |
+-------------------------------------+--------------------------------------+

It works! Thanks!
[sysadmin@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
+-----------------------+--------------------------------------------+
| Property | Value |
+-----------------------+--------------------------------------------+
| action | none |
| administrative | locked |
| availability | online |
| bm_ip | None |
| bm_type | none |
| bm_username | None |
| boot_device | /dev/disk/by-path/pci-0000:00:17.0-ata-1.0 |
| capabilities | {u'stor_function': u'monitor'} |
| clock_synchronization | ntp |
| config_applied | 4bb04add-b104-47f5-9df4-7cfeec25d52c |
| config_status | None |
| config_target | cbb04add-b104-47f5-9df4-7cfeec25d52c |
| console | tty0 |
| created_at | 2020-01-07T03:50:09.737602+00:00 |
| hostname ...

Revision history for this message
ChenjieXu (midone) wrote :

Hi Huang,

No problem! The huge pages configured by your original command are also useful:
   system host-memory-modify controller-0 0 -1G 100

When you create VMs in an OVS-DPDK environment, the VMs need to use huge pages for networking. You can configure a VM to use huge pages by adding the property "hw:mem_page_size=large" to the flavor used by the VM.
   openstack flavor set $i --property hw:mem_page_size=large
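
A minimal usage sketch, assuming a hypothetical flavor named dpdk.small and placeholder image/network names (only the hw:mem_page_size property is taken from the comment above):
   openstack flavor create --ram 4096 --vcpus 2 --disk 20 dpdk.small
   openstack flavor set dpdk.small --property hw:mem_page_size=large
   openstack server create --flavor dpdk.small --image <image> --network <network> dpdk-vm-1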

Revision history for this message
ChenjieXu (midone) wrote :

Hi Kristal,

As Huang has verified, this bug is a procedural issue. Could you please update the document? The following sections can be added to the document:
   If vswitch_type is set to OVS-DPDK, configure vSwitch memory per NUMA node:
      system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>
      i.e. system host-memory-modify -f vswitch -1G 1 compute-0 0
   The VMs created in an OVS-DPDK environment need to use huge pages for networking. Configure huge pages for VMs with the following command:
      system host-memory-modify -1G <1G hugepages number> <hostname or id> <processor>
      i.e. system host-memory-modify compute-0 0 -1G 10
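
For an all-in-one simplex host such as the one in this report, the complete sequence would presumably be (a sketch; the vswitch allocation of one 1G page per NUMA node was verified above, while the application page count of 10 is illustrative):
      system host-memory-modify -f vswitch -1G 1 controller-0 0
      system host-memory-modify -f vswitch -1G 1 controller-0 1
      system host-memory-modify -1G 10 controller-0 0
      system host-memory-modify -1G 10 controller-0 1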

Revision history for this message
Le, Huifeng (hle2) wrote :

Kris,

Could someone on your team help update the installation guide according to Chenjie's comments? Thanks very much!

Changed in starlingx:
assignee: ChenjieXu (midone) → Kristal Dale (kdale)
Revision history for this message
Kristal Dale (kdale) wrote :

Confirmed - I will make the update to the install guide. Thanks to everyone for the troubleshooting!

I will add everyone following this bug to the review, to confirm the correct location of the additional text.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to docs (master)

Fix proposed to branch: master
Review: https://review.opendev.org/701470

Changed in starlingx:
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to docs (master)

Reviewed: https://review.opendev.org/701470
Committed: https://git.openstack.org/cgit/starlingx/docs/commit/?id=ae1b78e124dbe31ddcabfb421ee683c9ecbd573f
Submitter: Zuul
Branch: master

commit ae1b78e124dbe31ddcabfb421ee683c9ecbd573f
Author: Kristal Dale <email address hidden>
Date: Tue Jan 7 14:35:52 2020 -0800

    Add additional config info for vswitch

    Fix procedural issue in docs by adding additional configurations
    for vswitch memory for numa node and huge pages (Simplex/Duplex)

    Patch set 2: Apply same change to Controller Storage/Dedicated Storage

    Closes-bug: 1858390

    Change-Id: I5a2b11dc7a041812b89f9616e168b9aa0b326156
    Signed-off-by: Kristal Dale <email address hidden>

Changed in starlingx:
status: In Progress → Fix Released