Cleaner pods stuck in Init status after system installation

Bug #1835428 reported by Peng Peng
Affects: StarlingX
Status: Triaged
Importance: Low
Assigned to: Unassigned

Bug Description

Brief Description
-----------------
After lab installation completes, two cleaner pods, heat-engine-cleaner and panko-events-cleaner, are stuck in Init status.

Severity
--------
Major

Steps to Reproduce
------------------
Install a simplex (SX) lab, then check the pod status.
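
The pod status check can be run as follows; this is a sketch using standard kubectl options (the same field selector appears in the log output captured in this report):

```shell
# List any pods not in Running or Succeeded phase, across all namespaces.
# On a healthy system this should print only the header line.
kubectl get pod --all-namespaces -o wide \
  --field-selector=status.phase!=Running,status.phase!=Succeeded
```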

TC-name:

Expected Behavior
------------------
All pods should be in Running or Completed status

Actual Behavior
----------------
The two cleaner pods are stuck in Init status.

Reproducibility
---------------
Seen once

System Configuration
--------------------
One node system

Lab-name: SM-3

Branch/Pull Time/Commit
-----------------------
stx master as of 20190704T013000Z

Last Pass
---------
Lab: SM_2
Load: 20190701T233000Z

Timestamp/Logs
--------------
[2019-07-04 08:30:09,278] 301 DEBUG MainThread ssh.send :: Send 'kubectl get pod --all-namespaces -o=wide --field-selector=status.phase!=Running,status.phase!=Succeeded'
[2019-07-04 08:30:09,584] 423 DEBUG MainThread ssh.expect :: Output:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
openstack cinder-volume-usage-audit-1562227800-sbp2b 0/1 Init:0/1 0 20m <none> controller-0 <none> <none>
openstack heat-engine-cleaner-1562227800-fnwwn 0/1 Init:0/1 0 20m <none> controller-0 <none> <none>
openstack panko-events-cleaner-1562227800-5xn8t 0/1 Init:0/1 0 20m <none> controller-0 <none> <none>

[2019-07-04 08:44:34,198] 301 DEBUG MainThread ssh.send :: Send 'kubectl get pod --all-namespaces -o=wide --field-selector=status.phase!=Running,status.phase!=Succeeded'
[2019-07-04 08:44:34,458] 423 DEBUG MainThread ssh.expect :: Output:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
openstack cinder-volume-usage-audit-1562227800-sbp2b 0/1 Init:0/1 0 34m <none> controller-0 <none> <none>
openstack heat-engine-cleaner-1562227800-fnwwn 0/1 Init:0/1 0 34m <none> controller-0 <none> <none>
openstack panko-events-cleaner-1562227800-5xn8t 0/1 Init:0/1 0 34m <none> controller-0 <none> <none>

controller-0:~$ kubectl get pod --all-namespaces -o=wide --field-selector=status.phase!=Running,status.phase!=Succeeded
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system ceph-pools-audit-1562250900-zzjjm 0/1 Pending 0 4m4s <none> controller-0 <none> <none>
openstack barbican-api-86d76774f8-rrbks 0/1 Init:0/1 0 7h13m 172.16.192.217 controller-0 <none> <none>
openstack cinder-volume-usage-audit-1562227800-sbp2b 0/1 Init:0/1 0 6h29m <none> controller-0 <none> <none>
openstack heat-engine-cleaner-1562227800-fnwwn 0/1 Init:0/1 0 6h29m <none> controller-0 <none> <none>
openstack horizon-6c76bc9b86-ll4ms 0/1 Pending 0 4m4s <none> controller-0 <none> <none>
openstack nova-api-osapi-56894c5dc4-kxf5n 0/1 Init:0/1 3 7h6m 172.16.192.222 controller-0 <none> <none>
openstack nova-scheduler-58444cb798-s6nt6 0/1 Init:0/1 3 7h6m 172.16.192.223 controller-0 <none> <none>
openstack panko-events-cleaner-1562227800-5xn8t 0/1 Init:0/1 0 6h29m <none> controller-0 <none> <none>
controller-0:~$ date
Thu Jul 4 14:40:11 UTC 2019
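
To investigate why a pod is stuck at Init:0/1, one approach is sketched below. The pod name is taken from the logs above; the init-container name is a placeholder that would need to be read from the `describe` or `jsonpath` output:

```shell
# Show recent events and the state of each init container on the stuck pod
kubectl -n openstack describe pod heat-engine-cleaner-1562227800-fnwwn

# List the init containers defined on the pod spec
kubectl -n openstack get pod heat-engine-cleaner-1562227800-fnwwn \
  -o jsonpath='{.spec.initContainers[*].name}'

# Fetch logs from a specific init container (substitute a name from above)
kubectl -n openstack logs heat-engine-cleaner-1562227800-fnwwn \
  -c <init-container-name>
```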

Test Activity
-------------
Sanity

Revision history for this message
Ghada Khalil (gkhalil) wrote :

@Peng, is there any system impact from this issue?

Revision history for this message
Peng Peng (ppeng) wrote :

The TC failed when checking pod status. There appears to be no major system impact.

Revision history for this message
Ghada Khalil (gkhalil) wrote :

Low priority / not gating -- issue seen once and has no system impact.

tags: added: stx.containers
Changed in starlingx:
status: New → Triaged
importance: Undecided → Low
Numan Waheed (nwaheed)
tags: added: stx.retestneeded