Activity log for bug #1988469

Date | Who | What changed | Old value | New value | Message
2022-09-01 18:25:46 | Rafael Falcão | bug | | | added bug
2022-09-01 18:27:03 | Thales Elero Cervi | bug | | | added subscriber Thales Elero Cervi
2022-09-02 12:28:46 | Rafael Falcão | starlingx: assignee | | Rafael Falcão (rafaelvfalc) |
2022-09-02 19:51:18 | Rafael Falcão | description | (previous description, see note below) | (updated description, reproduced below) |

The updated description follows; the previous value was identical except that Reproducibility read "Reproducible" rather than "Intermittent".

Brief Description
-----------------
When trying to apply stx-openstack latest_build (Build date: 01-Sep-2022 04:25) on an environment with stx (BUILD_ID="20220806T041101Z"), it fails to deploy osh-openstack-garbd.

Severity
--------
Critical. In Standard/Storage environments the garbd pod appears never to be deployed successfully.

Steps to Reproduce
------------------
Apply stx-openstack in an environment where the garbd pod is needed (see the command sketch after this entry).

Expected Behavior
-----------------
stx-openstack reaches 'applied' status.

Actual Behavior
---------------
stx-openstack reaches 'apply-failed' status.

Reproducibility
---------------
Intermittent

System Configuration
--------------------
Standard/Storage/DX+

Timestamp/Logs
--------------
Armada log:

2022-08-25 08:14:48.458 118 ERROR armada.cli Traceback (most recent call last):
2022-08-25 08:14:48.458 118 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/cli/__init__.py", line 38, in safe_invoke
2022-08-25 08:14:48.458 118 ERROR armada.cli     self.invoke()
2022-08-25 08:14:48.458 118 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 219, in invoke
2022-08-25 08:14:48.458 118 ERROR armada.cli     resp = self.handle(documents, tiller)
2022-08-25 08:14:48.458 118 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 81, in func_wrapper
2022-08-25 08:14:48.458 118 ERROR armada.cli     return future.result()
2022-08-25 08:14:48.458 118 ERROR armada.cli   File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
2022-08-25 08:14:48.458 118 ERROR armada.cli     return self.__get_result()
2022-08-25 08:14:48.458 118 ERROR armada.cli   File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
2022-08-25 08:14:48.458 118 ERROR armada.cli     raise self._exception
2022-08-25 08:14:48.458 118 ERROR armada.cli   File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
2022-08-25 08:14:48.458 118 ERROR armada.cli     result = self.fn(*self.args, **self.kwargs)
2022-08-25 08:14:48.458 118 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 267, in handle
2022-08-25 08:14:48.458 118 ERROR armada.cli     return armada.sync()
2022-08-25 08:14:48.458 118 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 118, in sync
2022-08-25 08:14:48.458 118 ERROR armada.cli     return self._sync()
2022-08-25 08:14:48.458 118 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 198, in _sync
2022-08-25 08:14:48.458 118 ERROR armada.cli     raise armada_exceptions.ChartDeployException(failures)
2022-08-25 08:14:48.458 118 ERROR armada.cli armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['openstack-garbd']

Pod logs:

controller-0:~$ kubectl get po -n openstack
NAME                                           READY   STATUS             RESTARTS       AGE
ingress-7695dffc88-cplrz                       1/1     Running            0              5h
ingress-7695dffc88-jp5mq                       1/1     Running            0              5h
ingress-error-pages-797bcfb495-fn4dw           1/1     Running            0              5h
ingress-error-pages-797bcfb495-wjcwn           1/1     Running            0              5h
mariadb-ingress-58f4fb6949-gbdpb               1/1     Running            0              5h
mariadb-ingress-58f4fb6949-hm6xp               1/1     Running            0              5h
mariadb-ingress-error-pages-7dc698877c-9f2zl   1/1     Running            0              5h
mariadb-server-0                               1/1     Running            0              5h
mariadb-server-1                               1/1     Running            0              5h
osh-openstack-garbd-garbd-7b65dc7bff-rl2q2     0/1     CrashLoopBackOff   63 (25s ago)   4h57m

controller-0:~$ kubectl -n openstack logs osh-openstack-garbd-garbd-7b65dc7bff-rl2q2
+ exec garbd --group=mariadb-server_openstack --address=gcomm://mariadb-server-0.mariadb-discovery.openstack.svc.cluster.local,mariadb-server-1.mariadb-discovery.openstack.svc.cluster.local
2022-08-25 12:41:36.889 INFO: CRC-32C: using hardware acceleration.
2022-08-25 12:41:36.889 INFO: Read config:
        daemon: 0
        name: garb
        address: gcomm://mariadb-server-0.mariadb-discovery.openstack.svc.cluster.local,mariadb-server-1.mariadb-discovery.openstack.svc.cluster.local
        group: mariadb-server_openstack
        sst: trivial
        donor:
        options: gcs.fc_limit=9999999; gcs.fc_factor=1.0; gcs.fc_master_slave=yes
        cfg:
        log:
2022-08-25 12:41:36.890 INFO: protonet asio version 0
2022-08-25 12:41:36.890 INFO: Using CRC-32C for message checksums.
2022-08-25 12:41:36.890 INFO: backend: asio
2022-08-25 12:41:36.890 INFO: gcomm thread scheduling priority set to other:0
2022-08-25 12:41:36.890 WARN: access file(./gvwstate.dat) failed(No such file or directory)
2022-08-25 12:41:36.890 INFO: restore pc from disk failed
2022-08-25 12:41:36.890 INFO: GMCast version 0
2022-08-25 12:41:36.890 FATAL: Exception in creating receive loop.

Test Activity
-------------
Sanity

Workaround
----------
N/A
2022-09-07 17:34:42 | OpenStack Infra | starlingx: status | New | In Progress |
2022-09-10 01:01:12 | Ghada Khalil | tags | | stx.distro.openstack |
2022-09-10 01:01:33 | Ghada Khalil | starlingx: importance | Undecided | High |
2022-09-10 01:01:43 | Ghada Khalil | tags | stx.distro.openstack | stx.8.0 stx.distro.openstack |
2022-09-13 13:21:49 | OpenStack Infra | starlingx: status | In Progress | Fix Released |