Applying the workaround below:
1. kubectl -n kube-system edit configmap coredns
2. Remove the "loop" line and save.
3. kubectl -n kube-system delete rs {your-coredns-replicaset-name}
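For context, the "loop" line in step 2 is the CoreDNS loop-detection plugin entry inside the Corefile held in that ConfigMap. A typical default kubeadm-style Corefile looks roughly like this (illustrative only; the plugin set in your deployment may differ):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    loop
    forward . /etc/resolv.conf
    cache 30
    reload
}
```

Deleting the ReplicaSet in step 3 forces the Deployment to create new CoreDNS pods that pick up the edited Corefile.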
The following behavior is observed: the apply always freezes at 40% on the glance pod until the 30-minute timeout is reached. The relevant log info is below:
- The bootstrap pod cycles through CrashLoopBackOff several times before the timeout is reached.
====================
openstack   glance-bootstrap-htn9q   0/1   CrashLoopBackOff   5   7m51s
====================
- The sysinv.log shows:
=============
2019-02-18 15:47:08.578 44 DEBUG armada.handlers.k8s [-] Watch event MODIFIED on job glance-db-sync _watch_job_completion /usr/local/lib/python3.5/site-packages/armada/handlers/k8s.py:523
2019-02-18 15:47:08.578 44 DEBUG armada.handlers.k8s [-] Job glance-db-sync complete (spec.completions=1, status.succeeded=1) _watch_job_completion /usr/local/lib/python3.5/site-packages/armada/handlers/k8s.py:536
2019-02-18 15:47:19.870 44 DEBUG armada.handlers.k8s [-] Watch event MODIFIED on job glance-ks-endpoints _watch_job_completion /usr/local/lib/python3.5/site-packages/armada/handlers/k8s.py:523
2019-02-18 15:47:19.870 44 DEBUG armada.handlers.k8s [-] Job glance-ks-endpoints complete (spec.completions=1, status.succeeded=1) _watch_job_completion /usr/local/lib/python3.5/site-packages/armada/handlers/k8s.py:536
2019-02-18 15:47:25.070 44 DEBUG armada.handlers.k8s [-] Watch event MODIFIED on job glance-ks-user _watch_job_completion /usr/local/lib/python3.5/site-packages/armada/handlers/k8s.py:523
2019-02-18 15:47:25.070 44 DEBUG armada.handlers.k8s [-] Job glance-ks-user complete (spec.completions=1, status.succeeded=1) _watch_job_completion /usr/local/lib/python3.5/site-packages/armada/handlers/k8s.py:536
2019-02-18 15:47:33.262 44 DEBUG armada.handlers.k8s [-] Watch event MODIFIED on job glance-storage-init _watch_job_completion /usr/local/lib/python3.5/site-packages/armada/handlers/k8s.py:523
2019-02-18 15:47:33.263 44 DEBUG armada.handlers.k8s [-] Job glance-storage-init complete (spec.completions=1, status.succeeded=1) _watch_job_completion /usr/local/lib/python3.5/site-packages/armada/handlers/k8s.py:536
2019-02-18 15:55:07.438 44 DEBUG armada.handlers.k8s [-] Watch event MODIFIED on job glance-bootstrap _watch_job_completion /usr/local/lib/python3.5/site-packages/armada/handlers/k8s.py:523
2019-02-18 15:55:07.439 44 DEBUG armada.handlers.k8s [-] Watch event MODIFIED on job glance-bootstrap _watch_job_completion /usr/local/lib/python3.5/site-packages/armada/handlers/k8s.py:523
=============
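The "complete (spec.completions=1, status.succeeded=1)" messages above reflect the predicate Armada evaluates on each watch event. A minimal sketch of that check (the function name and signature here are illustrative, not Armada's actual API):

```python
def job_complete(spec_completions, status_succeeded):
    """Return True once the number of succeeded pods reaches
    spec.completions -- the condition _watch_job_completion logs,
    e.g. (spec.completions=1, status.succeeded=1)."""
    expected = spec_completions or 1  # Kubernetes defaults completions to 1
    return (status_succeeded or 0) >= expected
```

This matches the failure pattern above: glance-bootstrap keeps emitting MODIFIED events without ever satisfying the predicate (its pod is in CrashLoopBackOff), so Armada watches until the 30-minute timeout expires.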