Brief Description
-----------------
When applying the stx-openstack latest_build (build date: 01-Sep-2022 04:25) on an environment with stx (BUILD_ID="20220806T041101Z"), the apply fails to deploy osh-openstack-garbd.
Severity
--------
Critical. In Standard/Storage environments the garbd pod appears unable to ever deploy.
Steps to Reproduce
------------------
Apply stx-openstack in an environment where the garbd pod is needed.
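The step above can be sketched with standard StarlingX/kubectl commands (the tarball name is illustrative, and a live controller with the app staged is assumed):

```shell
# Upload and apply the stx-openstack application
# (tarball name below is illustrative)
system application-upload stx-openstack-1.0-xx.tgz
system application-apply stx-openstack

# Watch progress; on affected systems the status ends in 'apply-failed'
watch -n 5 system application-show stx-openstack

# Confirm the garbd pod is crash-looping and inspect its logs
kubectl get pods -n openstack | grep garbd
kubectl -n openstack logs deployment/osh-openstack-garbd-garbd
```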
Expected Behavior
------------------
stx-openstack reaches 'applied' status
Actual Behavior
----------------
stx-openstack reaches 'apply-failed' status
Reproducibility
---------------
Reproducible
System Configuration
--------------------
Standard/Storage/DX+
Timestamp/Logs
--------------
Armada log:
2022-08-25 08:14:48.458 118 ERROR armada.cli Traceback (most recent call last):
2022-08-25 08:14:48.458 118 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/__init__.py", line 38, in safe_invoke
2022-08-25 08:14:48.458 118 ERROR armada.cli self.invoke()
2022-08-25 08:14:48.458 118 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 219, in invoke
2022-08-25 08:14:48.458 118 ERROR armada.cli resp = self.handle(documents, tiller)
2022-08-25 08:14:48.458 118 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 81, in func_wrapper
2022-08-25 08:14:48.458 118 ERROR armada.cli return future.result()
2022-08-25 08:14:48.458 118 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
2022-08-25 08:14:48.458 118 ERROR armada.cli return self.__get_result()
2022-08-25 08:14:48.458 118 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
2022-08-25 08:14:48.458 118 ERROR armada.cli raise self._exception
2022-08-25 08:14:48.458 118 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
2022-08-25 08:14:48.458 118 ERROR armada.cli result = self.fn(*self.args, **self.kwargs)
2022-08-25 08:14:48.458 118 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 267, in handle
2022-08-25 08:14:48.458 118 ERROR armada.cli return armada.sync()
2022-08-25 08:14:48.458 118 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 118, in sync
2022-08-25 08:14:48.458 118 ERROR armada.cli return self._sync()
2022-08-25 08:14:48.458 118 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 198, in _sync
2022-08-25 08:14:48.458 118 ERROR armada.cli raise armada_exceptions.ChartDeployException(failures)
2022-08-25 08:14:48.458 118 ERROR armada.cli armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['openstack-garbd']
Pod logs:
controller-0:~$ kubectl get po -n openstack
NAME READY STATUS RESTARTS AGE
ingress-7695dffc88-cplrz 1/1 Running 0 5h
ingress-7695dffc88-jp5mq 1/1 Running 0 5h
ingress-error-pages-797bcfb495-fn4dw 1/1 Running 0 5h
ingress-error-pages-797bcfb495-wjcwn 1/1 Running 0 5h
mariadb-ingress-58f4fb6949-gbdpb 1/1 Running 0 5h
mariadb-ingress-58f4fb6949-hm6xp 1/1 Running 0 5h
mariadb-ingress-error-pages-7dc698877c-9f2zl 1/1 Running 0 5h
mariadb-server-0 1/1 Running 0 5h
mariadb-server-1 1/1 Running 0 5h
osh-openstack-garbd-garbd-7b65dc7bff-rl2q2 0/1 CrashLoopBackOff 63 (25s ago) 4h57m
controller-0:~$ kubectl -n openstack logs osh-openstack-garbd-garbd-7b65dc7bff-rl2q2
+ exec garbd --group=mariadb-server_openstack --address=gcomm://mariadb-server-0.mariadb-discovery.openstack.svc.cluster.local,mariadb-server-1.mariadb-discovery.openstack.svc.cluster.local
2022-08-25 12:41:36.889 INFO: CRC-32C: using hardware acceleration.
2022-08-25 12:41:36.889 INFO: Read config:
daemon: 0
name: garb
address: gcomm://mariadb-server-0.mariadb-discovery.openstack.svc.cluster.local,mariadb-server-1.mariadb-discovery.openstack.svc.cluster.local
group: mariadb-server_openstack
sst: trivial
donor:
options: gcs.fc_limit=9999999; gcs.fc_factor=1.0; gcs.fc_master_slave=yes
cfg:
log: 2022-08-25 12:41:36.890 INFO: protonet asio version 0
2022-08-25 12:41:36.890 INFO: Using CRC-32C for message checksums.
2022-08-25 12:41:36.890 INFO: backend: asio
2022-08-25 12:41:36.890 INFO: gcomm thread scheduling priority set to other:0
2022-08-25 12:41:36.890 WARN: access file(./gvwstate.dat) failed(No such file or directory)
2022-08-25 12:41:36.890 INFO: restore pc from disk failed
2022-08-25 12:41:36.890 INFO: GMCast version 0
2022-08-25 12:41:36.890 FATAL: Exception in creating receive loop.
Test Activity
-------------
Sanity
Workaround
----------
N/A