Worker fails reboot recovery due to SRIOV timeout

Bug #1916620 reported by Douglas Henrique Koerich
Affects: StarlingX
Status: Fix Released
Importance: Medium
Assigned to: Douglas Henrique Koerich

Bug Description

Brief Description
-----------------
When testing an AIO-SX configuration with modified CPU allocation, with SR-IOV enabled and a large number of pods running, it was observed that after unlocking the host the system entered a reboot loop due to a timeout failure while applying the worker manifest.

Severity
--------
Major.

Steps to Reproduce
------------------
- Lock the host;
- Configure at least 16 CPUs for the platform function;
- Enable and configure an SR-IOV interface;
- With an increased pod limit, start 400 pods;
- Unlock the host.
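On a typical StarlingX deployment the steps above map to a CLI sequence roughly like the following. This is a hedged sketch: the interface name, VF count, deployment name, and some flag spellings are illustrative and should be checked against `system help` for the release in use; the mechanism for raising the per-node pod limit also varies by release.

```
system host-lock controller-0
# Assign 16 CPUs to the platform function (flag syntax is illustrative)
system host-cpu-modify -f platform -p0 16 controller-0
# Configure an SR-IOV interface; enp24s0f0, sriov0 and datanet0 are placeholders
system host-if-modify -c pci-sriov -N 32 -n sriov0 controller-0 enp24s0f0
system interface-datanetwork-assign controller-0 sriov0 datanet0
# With the pod limit raised, scale a test workload to 400 pods
kubectl scale deployment test-pods --replicas=400
system host-unlock controller-0
```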

Expected Behavior
------------------
Reboot after unlock should complete successfully and all pods should be running.

Actual Behavior
----------------
The system went into a reboot loop (two or more reboots observed).

Reproducibility
---------------
Reproducible.

System Configuration
--------------------
One node system (AIO-SX).

Branch/Pull Time/Commit
-----------------------
###
### StarlingX
### Release 20.12
###
### Wind River Systems, Inc.
###

SW_VERSION="20.12"
BUILD_TARGET="Host Installer"
BUILD_TYPE="Formal"
BUILD_ID="2021-02-18_06-00-00"
SRC_BUILD_ID="883"

JOB="StarlingX_Upstream_build"
BUILD_BY="jenkins"
BUILD_NUMBER="883"
BUILD_HOST="yow-cgts4-lx.wrs.com"
BUILD_DATE="2021-02-18 06:04:39 -0500"

Last Pass
---------
N/A.

Timestamp/Logs
--------------
From worker's puppet log:

2021-02-23T14:47:53.280 Debug: 2021-02-23 14:47:53 +0000 Exec[Delete sriov device plugin pod if present](provider=posix): Executing check 'kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system --selector=app=sriovdp --field-selector spec.nodeName=$(hostname) | grep kube-sriov-device-plugin'
2021-02-23T14:47:53.282 Debug: 2021-02-23 14:47:53 +0000 Executing: 'kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system --selector=app=sriovdp --field-selector spec.nodeName=$(hostname) | grep kube-sriov-device-plugin'
2021-02-23T14:47:53.349 Debug: 2021-02-23 14:47:53 +0000 /Stage[main]/Platform::Kubernetes::Worker::Sriovdp/Exec[Delete sriov device plugin pod if present]/onlyif: kube-sriov-device-plugin-amd64-ws67f 0/1 Pending 0 51s
2021-02-23T14:47:53.351 Debug: 2021-02-23 14:47:53 +0000 Exec[Delete sriov device plugin pod if present](provider=posix): Executing 'kubectl --kubeconfig=/etc/kubernetes/admin.conf delete pod -n kube-system --selector=app=sriovdp --field-selector spec.nodeName=$(hostname) --timeout=60s'
2021-02-23T14:47:53.353 Debug: 2021-02-23 14:47:53 +0000 Executing: 'kubectl --kubeconfig=/etc/kubernetes/admin.conf delete pod -n kube-system --selector=app=sriovdp --field-selector spec.nodeName=$(hostname) --timeout=60s'
2021-02-23T14:48:53.433 Notice: 2021-02-23 14:48:53 +0000 /Stage[main]/Platform::Kubernetes::Worker::Sriovdp/Exec[Delete sriov device plugin pod if present]/returns: pod "kube-sriov-device-plugin-amd64-ws67f" deleted
2021-02-23T14:48:53.435 Notice: 2021-02-23 14:48:53 +0000 /Stage[main]/Platform::Kubernetes::Worker::Sriovdp/Exec[Delete sriov device plugin pod if present]/returns: error: timed out waiting for the condition on pods/kube-sriov-device-plugin-amd64-ws67f
2021-02-23T14:48:53.437 Error: 2021-02-23 14:48:53 +0000 kubectl --kubeconfig=/etc/kubernetes/admin.conf delete pod -n kube-system --selector=app=sriovdp --field-selector spec.nodeName=$(hostname) --timeout=60s returned 1 instead of one of [0]

Test Activity
-------------
Developer testing.

Workaround
----------
Increase the timeout value for SR-IOV device plugin deletion (introduced in bug 1900736) in /usr/share/puppet/modules/platform/manifests/kubernetes.pp.
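The Exec resource in question invokes `kubectl delete ... --timeout=60s`, as the puppet log above shows; the workaround raises that value. A hedged sketch of what the edited resource could look like in kubernetes.pp (the resource structure and the 360s value are illustrative; the command and onlyif check are taken verbatim from the log, and the real resource in stx-puppet may differ):

```puppet
# Illustrative fragment for /usr/share/puppet/modules/platform/manifests/kubernetes.pp
exec { 'Delete sriov device plugin pod if present':
  # --timeout raised from 60s; 360s is an example value, not the merged one
  command => 'kubectl --kubeconfig=/etc/kubernetes/admin.conf delete pod -n kube-system --selector=app=sriovdp --field-selector spec.nodeName=$(hostname) --timeout=360s',
  onlyif  => 'kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system --selector=app=sriovdp --field-selector spec.nodeName=$(hostname) | grep kube-sriov-device-plugin',
  path    => '/usr/bin:/usr/sbin:/bin',
}
```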


Changed in starlingx:
assignee: nobody → Douglas Henrique Koerich (dkoerich-wr)
status: New → In Progress
Revision history for this message
Douglas Henrique Koerich (dkoerich-wr) wrote :

I recalled past issues that relate to this problem, and I am listing them below for background reference:

Bug 1850438;
Bug 1885229;
Bug 1896631;

(One relevant comment in the last one above is: "There is a race between the kubernetes processes coming up after the controller manifest is applied and the application of the worker manifest. (...) The fix for this would be quite extensive, requiring the creation of a new AIO, or separate kubernetes manifest to coordinate the bring-up of k8s services and the worker configuration.")

Bug 1900736.

Until the final fix that eliminates the race condition is ready, the timeout value will be increased to account for varying load from pods. To choose a suitable value, measurements will be taken with:

- Different number and types of pods;
- Different values of timeout.

Ghada Khalil (gkhalil)
tags: added: stx.5.0 stx.networking
Changed in starlingx:
importance: Undecided → Critical
importance: Critical → High
importance: High → Medium
Douglas Henrique Koerich (dkoerich-wr) wrote :

The issue is indeed caused by concurrency under high load between kubelet (launching the pods) and puppet (applying the worker manifest), as shown in the table below, obtained from tests on an AIO-SX lab with a small, generic pod:

Table 1: Elapsed time between relevant events vs. number of pods
+-----------------------------------+----------+----------+-----------+
| Event                             | 100 pods | 200 pods | 300 pods* |
+-----------------------------------+----------+----------+-----------+
| Finished with controller manifest | 0 sec    | 0 sec    | 0 sec     |
| Pods get launched                 | 26 sec   | 23 sec   | 27 sec    |
| Started with worker manifest      | 1 sec    | 3 sec    | <1 sec    |
| Triggered delete of sriovdp       | 23 sec   | 46 sec   | 62 sec    |
| sriovdp deleted                   | 18 sec   | 107 sec  | 245 sec   |
| Finished with worker manifest     | 1 sec    | 10 sec   | 1 sec     |
+-----------------------------------+----------+----------+-----------+
(*) System became unstable due to heavy load from the pods
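The "sriovdp deleted" row explains the failure directly: only the 100-pod case finishes within the 60-second kubectl timeout seen in the log above. A quick sketch of that comparison (values copied from Table 1):

```shell
# Compare measured sriovdp deletion times (Table 1) with the 60s kubectl timeout
timeout=60
for entry in "100:18" "200:107" "300:245"; do
  pods=${entry%%:*}   # number of pods
  secs=${entry##*:}   # seconds until sriovdp was deleted
  if [ "$secs" -le "$timeout" ]; then
    echo "$pods pods: ${secs}s -> within timeout"
  else
    echo "$pods pods: ${secs}s -> exceeds timeout"
  fi
done
```

So at 200 pods and above the deletion alone already exceeds the 60s limit, which matches the observed manifest failure.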

Douglas Henrique Koerich (dkoerich-wr) wrote :

Proposed workaround fix available for review at: https://review.opendev.org/c/starlingx/stx-puppet/+/777587

Changed in starlingx:
status: In Progress → Fix Released
OpenStack Infra (hudson-openstack) wrote : Fix proposed to stx-puppet (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792009

OpenStack Infra (hudson-openstack) wrote : Change abandoned on stx-puppet (f/centos8)

Change abandoned by "Chuck Short <email address hidden>" on branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792009

OpenStack Infra (hudson-openstack) wrote : Fix proposed to stx-puppet (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792013

OpenStack Infra (hudson-openstack) wrote : Change abandoned on stx-puppet (f/centos8)

Change abandoned by "Chuck Short <email address hidden>" on branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792013

OpenStack Infra (hudson-openstack) wrote : Fix proposed to stx-puppet (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792018

OpenStack Infra (hudson-openstack) wrote : Change abandoned on stx-puppet (f/centos8)

Change abandoned by "Chuck Short <email address hidden>" on branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792018

OpenStack Infra (hudson-openstack) wrote : Fix proposed to stx-puppet (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792029

OpenStack Infra (hudson-openstack) wrote : Fix merged to stx-puppet (f/centos8)

Reviewed: https://review.opendev.org/c/starlingx/stx-puppet/+/792029
Committed: https://opendev.org/starlingx/stx-puppet/commit/2b026190a3cb6d561b6ec4a46dfb3add67f1fa69
Submitter: "Zuul (22348)"
Branch: f/centos8

commit 3e3940824dfb830ebd39fd93265b983c6a22fc51
Author: Dan Voiculeasa <email address hidden>
Date: Thu May 13 18:03:45 2021 +0300

    Enable kubelet support for pod pid limit

    Enable limiting the number of pids inside of pods.

    Add a default value to protect against a missing value.
    Default to 750 pids limit to align with service parameter default
    value for most resource consuming StarlingX optional app (openstack).
    In fact any value above service parameter minimum value is good for the
    default.

    Closes-Bug: 1928353
    Signed-off-by: Dan Voiculeasa <email address hidden>
    Change-Id: I10c1684fe3145e0a46b011f8e87f7a23557ddd4a

commit 0c16d288fbc483103b7ba5dad7782e97f59f4e17
Author: Jessica Castelino <email address hidden>
Date: Tue May 11 10:21:57 2021 -0400

    Safe restart of the etcd SM service in etcd upgrade runtime class

    While upgrading the central cloud of a DC system, activation failed
    because there was an unexpected SWACT to controller-1. This was due
    to the etcd upgrade script. Part of this script runs the etcd
    manifest. This triggers a reload/restart of the etcd service. As this
    is done outside of the sm, sm saw the process failure and triggered
    the SWACT.

    This commit modifies platform::etcd::upgrade::runtime puppet class
    to do a safe restart of the etcd SM service and thus, solve the
    issue.

    Change-Id: I3381b6976114c77ee96028d7d96a00302ad865ec
    Signed-off-by: Jessica Castelino <email address hidden>
    Closes-Bug: 1928135

commit eec3008f600aeeb69a42338ed44332228a862d11
Author: Mihnea Saracin <email address hidden>
Date: Mon May 10 13:09:52 2021 +0300

    Serialize updates to global_filter in the AIO manifest

    Right now, looking at the aio manifest:
    https://review.opendev.org/c/starlingx/stx-puppet/+/780600/15/puppet-manifests/src/manifests/aio.pp
    there are 3 classes that update
    in parallel the lvm global_filter:
    - include ::platform::lvm::controller
    - include ::platform::worker::storage
    - include ::platform::lvm::compute
    And this generates some errors.

    We fix this by adding dependencies between the above classes
    in order to update the global_filter in a serial mode.

    Closes-Bug: 1927762
    Signed-off-by: Mihnea Saracin <email address hidden>
    Change-Id: If6971e520454cdef41138b2f29998c036d8307ff

commit 97371409b9b2ae3f0db6a6a0acaeabd74927160e
Author: Steven Webster <email address hidden>
Date: Fri May 7 15:33:43 2021 -0400

    Add SR-IOV rate-limit dependency

    Currently, the binding of an SR-IOV virtual function (VF) to a
    driver has a dependency on platform::networking. This is needed
    to ensure that SR-IOV is enabled (VFs created) before actually
    doing the bind.

    This dependency does not exist for configuring the VF rate-limits
    however. There is a cha...

tags: added: in-f-centos8