Some more information from /var/log/armada/stx-openstack-apply_2022-05-22-08-45-38.log:

...
2022-05-22 09:13:43.700 138 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176
2022-05-22 09:14:43.774 138 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176
2022-05-22 09:15:43.841 138 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176
2022-05-22 09:15:44.582 138 ERROR armada.handlers.wait [-] [chart=openstack-nova-api-proxy]: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-nova-api-proxy)). These pods were not ready=['nova-api-proxy-667468b59d-vzzsb']
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada [-] Chart deploy [openstack-nova-api-proxy] failed: armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-nova-api-proxy)). These pods were not ready=['nova-api-proxy-667468b59d-vzzsb']
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada Traceback (most recent call last):
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 170, in handle_result
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     result = get_result()
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     return self.__get_result()
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     raise self._exception
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     result = self.fn(*self.args, **self.kwargs)
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 159, in deploy_chart
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     concurrency)
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 55, in execute
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     ch, cg_test_all_charts, prefix, known_releases)
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 267, in _execute
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     chart_wait.wait(timer)
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 142, in wait
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     wait.wait(timeout=timeout)
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 302, in wait
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     modified = self._wait(deadline)
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/decorator.py", line 232, in fun
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     return caller(func, *(extras + args), **kw)
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/retry/api.py", line 74, in retry_decorator
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     logger)
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/retry/api.py", line 33, in __retry_internal
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     return f()
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 372, in _wait
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada     raise k8s_exceptions.KubernetesWatchTimeoutException(error)
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-nova-api-proxy)). These pods were not ready=['nova-api-proxy-667468b59d-vzzsb']
2022-05-22 09:15:44.582 138 ERROR armada.handlers.armada
2022-05-22 09:15:44.586 138 ERROR armada.handlers.armada [-] Chart deploy(s) failed: ['openstack-nova-api-proxy']
2022-05-22 09:15:44.851 138 INFO armada.handlers.lock [-] Releasing lock
2022-05-22 09:15:44.856 138 ERROR armada.cli [-] Caught internal exception: armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['openstack-nova-api-proxy']
2022-05-22 09:15:44.856 138 ERROR armada.cli Traceback (most recent call last):
2022-05-22 09:15:44.856 138 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/cli/__init__.py", line 38, in safe_invoke
2022-05-22 09:15:44.856 138 ERROR armada.cli     self.invoke()
2022-05-22 09:15:44.856 138 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 219, in invoke
2022-05-22 09:15:44.856 138 ERROR armada.cli     resp = self.handle(documents, tiller)
2022-05-22 09:15:44.856 138 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 81, in func_wrapper
2022-05-22 09:15:44.856 138 ERROR armada.cli     return future.result()
2022-05-22 09:15:44.856 138 ERROR armada.cli   File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
2022-05-22 09:15:44.856 138 ERROR armada.cli     return self.__get_result()
2022-05-22 09:15:44.856 138 ERROR armada.cli   File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
2022-05-22 09:15:44.856 138 ERROR armada.cli     raise self._exception
2022-05-22 09:15:44.856 138 ERROR armada.cli   File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
2022-05-22 09:15:44.856 138 ERROR armada.cli     result = self.fn(*self.args, **self.kwargs)
2022-05-22 09:15:44.856 138 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 267, in handle
2022-05-22 09:15:44.856 138 ERROR armada.cli     return armada.sync()
2022-05-22 09:15:44.856 138 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 118, in sync
2022-05-22 09:15:44.856 138 ERROR armada.cli     return self._sync()
2022-05-22 09:15:44.856 138 ERROR armada.cli   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 198, in _sync
2022-05-22 09:15:44.856 138 ERROR armada.cli     raise armada_exceptions.ChartDeployException(failures)
2022-05-22 09:15:44.856 138 ERROR armada.cli armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['openstack-nova-api-proxy']
2022-05-22 09:15:44.856 138 ERROR armada.cli
command terminated with exit code 1

The following pods are in Evicted state:

fm-rest-api-5dcd9d9484-fjbq6
fm-rest-api-5dcd9d9484-l2gcn
horizon-7677cfcc65-d4j22
ingress-84c5f4749f-sl7j7
mariadb-ingress-bcd8fb475-sjp59
nova-api-proxy-667468b59d-lq2lx
nova-api-proxy-667468b59d-vzzsb
placement-api-796896f544-4db8b

Describing some of them, I can see this:

Warning  FailedScheduling  11h (x22 over 11h)  default-scheduler      0/2 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity rules, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
Normal   Scheduled         11h                 default-scheduler      Successfully assigned openstack/nova-api-proxy-667468b59d-lq2lx to controller-1
Normal   AddedInterface    11h                 multus                 Add eth0 [172.16.166.173/32]
Normal   Pulled            11h                 kubelet, controller-1  Container image "registry.local:9001/quay.io/stackanetes/kubernetes-entrypoint:v0.3.1" already present on machine
Normal   Created           11h                 kubelet, controller-1  Created container init
Normal   Started           11h                 kubelet, controller-1  Started container init
Normal   Pulled            11h                 kubelet, controller-1  Container image "registry.local:9001/docker.io/starlingx/stx-nova-api-proxy:master-centos-stable-20220519T055212Z.0" already present on machine
Normal   Created           11h                 kubelet, controller-1  Created container nova-api-proxy
Normal   Started           11h                 kubelet, controller-1  Started container nova-api-proxy
Warning  Unhealthy         11h                 kubelet, controller-1  Readiness probe failed: dial tcp 172.16.166.173:8774: connect: connection refused
Warning  Evicted           10h                 kubelet, controller-1  The node was low on resource: ephemeral-storage. Container nova-api-proxy was using 9036Ki, which exceeds its request of 0.
Normal   Killing           10h                 kubelet, controller-1  Stopping container nova-api-proxy
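So the evictions appear to come from ephemeral-storage pressure on controller-1: the node carried the node.kubernetes.io/disk-pressure taint, and the kubelet evicted pods whose ephemeral-storage usage exceeded their (zero) request. A minimal sketch of how this could be confirmed and cleaned up, assuming the openstack namespace and controller-1 from the output above; the filesystem paths in the df line are an assumption and may differ on your node:

    # Evicted pods are reported with phase Failed
    kubectl -n openstack get pods --field-selector=status.phase=Failed

    # Check whether controller-1 still reports disk pressure / carries the taint
    kubectl describe node controller-1 | grep -E -A2 'Taints|DiskPressure'

    # Free space on the filesystems backing kubelet/docker (paths are an assumption)
    df -h /var/lib/kubelet /var/lib/docker

    # Once space is recovered, remove the evicted pods and retry the apply
    kubectl -n openstack delete pods --field-selector=status.phase=Failed
    system application-apply stx-openstack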