Comment 0 for bug 2031058

Luan Nunes Utimura (lutimura) wrote:

Brief Description
-----------------
It has recently been observed that, on systems with multiple controller nodes, the `clients` pods on stand-by controllers fail to initialize because their expected working directories do not exist on those hosts.

Severity
--------
Major.

Steps to Reproduce
------------------
On a system with multiple controller nodes:
1) Upload/apply stx-openstack;
2) Verify that `clients` pods on stand-by controllers aren't initializing.
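For reference, the pod status can be checked with standard kubectl commands once the application is applied (the pod name below is a placeholder):

```
# List the clients pods and the node each one is scheduled on
kubectl -n openstack get pods -o wide | grep clients

# Show details and recent events for a pod stuck in Init (mount failures appear here)
kubectl -n openstack describe pod <clients-pod-name>
```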

Expected Behavior
-----------------
All `clients` pods should be running.

Actual Behavior
---------------
Only the `clients` pod on the active controller reaches the Running state; the pods on stand-by controllers remain stuck initializing.

Reproducibility
---------------
Reproducible.

System Configuration
--------------------
System with two or more controller nodes.

Branch/Pull Time/Commit
-----------------------
StarlingX (master)
StarlingX OpenStack (master)

Last Pass
---------
N/A.

Timestamp/Logs
--------------
```
[sysadmin@controller-0 ~(keystone_admin)]$ kubectl -n openstack get pods | grep clients
clients-clients-controller-0-937646f6-pnq6c 1/1 Running 0 9m34s
clients-clients-controller-1-cab72f56-tn252 0/1 Init:0/2 0 9m34s

[sysadmin@controller-0 ~(keystone_admin)]$ kubectl -n openstack describe pod/clients-clients-controller-1-cab72f56-tn252
  Warning FailedMount 2m (x12 over 10m) kubelet MountVolume.SetUp failed for volume "clients-working-directory" : hostPath type check failed: /var/opt/openstack is not a directory
  Warning FailedMount 83s (x3 over 8m11s) kubelet Unable to attach or mount volumes: unmounted volumes=[clients-working-directory], unattached volumes=[kube-api-access-kcr8l pod-tmp clients-bin clients-working-directory]: timed out waiting for the condition
```
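The `hostPath type check failed` event indicates that kubelet rejects the `clients-working-directory` mount because `/var/opt/openstack` does not exist on the stand-by controller. A quick way to confirm this on the affected node is sketched below; the pod name is taken from the listing above and the jsonpath query only reads the volume definition:

```
# On controller-1, confirm the expected working directory is absent
ls -ld /var/opt/openstack

# Inspect how the volume is declared in the pod spec (hostPath path and type)
kubectl -n openstack get pod clients-clients-controller-1-cab72f56-tn252 \
  -o jsonpath='{.spec.volumes[?(@.name=="clients-working-directory")].hostPath}'
```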

Test Activity
-------------
Developer Testing.

Workaround
----------
Manually SSH into each stand-by controller and create the expected working directory (`/var/opt/openstack`, as reported in the FailedMount event above).
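A minimal sketch of the workaround, assuming `/var/opt/openstack` is the missing path reported in the logs (the required ownership and permissions are not confirmed here):

```
# On each stand-by controller (e.g. controller-1), create the working directory
# so the hostPath mount succeeds and the pod can finish initializing.
ssh sysadmin@controller-1
sudo mkdir -p /var/opt/openstack
```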