2019-09-18 20:31:37 |
Anujeyan Manokeran |
bug |
|
|
added bug |
2019-09-20 19:08:31 |
Ghada Khalil |
description |
Brief Description
-----------------
After a DOR (Dead Office Recovery) test, calico-node pods go into CrashLoopBackOff as shown below.
A lighttpd alarm also appeared, as in https://bugs.launchpad.net/starlingx/+bug/1844456 .
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | degraded |
| 2 | compute-0 | worker | unlocked | enabled | available |
| 3 | compute-1 | worker | unlocked | enabled | available |
| 4 | compute-2 | worker | unlocked | enabled | available |
| 5 | controller-1 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
[sysadmin@controller-0 log(keystone_admin)]$
controller-0:/var/log# cat lighttpd-error.log
2019-09-17 06:27:49: (server.c.1472) server started (lighttpd/1.4.52)
2019-09-17 06:40:13: (server.c.2067) server stopped by UID = 0 PID = 1
fm alarm-list
+----------+----------------------------------------------------------------------------+------------------------------+----------+---------------------+
| Alarm ID | Reason Text | Entity ID | Severity | Time Stamp |
+----------+----------------------------------------------------------------------------+------------------------------+----------+---------------------+
| 400.001 | Service group web-services degraded; lighttpd(enabled-active, failed) | service_domain=controller. | major | 2019-09-17T20:53:19 |
| | | service_group=web-services. | | .183650 |
| | | host=controller-0 | | |
| | | | | |
| 100.114 | NTP address 2607:5300:60:92e7::1 is not a valid or a reachable NTP server. | host=controller-0.ntp=2607: | minor | 2019-09-17T13:56:32 |
| | | 5300:60:92e7::1 | | .157376 |
| | | | | |
| 100.114 | NTP address 2607:5300:60:3308::1 is not a valid or a reachable NTP server. | host=controller-1.ntp=2607: | minor | 2019-09-17T13:35:00 |
| | | 5300:60:3308::1 | | .785675 |
| | | | | |
+----------+----------------------------------------------------------------------------+------------------------------+----------+---------------------+
kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-767467f9cf-jrx52 1/1 Running 1 4m24s dead:beef::8e22:765f:6121:eb4f controller-0 <none> <none>
calico-node-8f4l2 0/1 Running 5 13h face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
calico-node-9zwtk 1/1 Running 3 14h face::3 controller-0 <none> <none>
calico-node-d4tff 0/1 Running 4 13h face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
calico-node-hmrgr 1/1 Running 2 13h face::4 controller-1 <none> <none>
calico-node-lrbxb 0/1 Running 5 13h face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
ceph-pools-audit-1568752200-g68wf 0/1 Completed 0 24m dead:beef::a4ce:fec1:5423:e331 controller-1 <none> <none>
ceph-pools-audit-1568752500-xwrvw 0/1 Completed 0 19m dead:beef::8e22:765f:6121:eb4a controller-0 <none> <none>
ceph-pools-audit-1568753400-zw4bl 0/1 Completed 0 4m25s dead:beef::8e22:765f:6121:eb4e controller-0 <none> <none>
coredns-7cf476b5c8-4vkzr 1/1 Running 3 7h22m dead:beef::8e22:765f:6121:eb4c controller-0 <none> <none>
coredns-7cf476b5c8-qcxph 1/1 Running 5 7h53m dead:beef::a4ce:fec1:5423:e332 controller-1 <none> <none>
kube-apiserver-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-apiserver-controller-1 1/1 Running 4 13h face::4 controller-1 <none> <none>
kube-controller-manager-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-controller-manager-controller-1 1/1 Running 2 13h face::4 controller-1 <none> <none>
kube-multus-ds-amd64-5l4dj 1/1 Running 0 3m3s face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-multus-ds-amd64-9n2cj 1/1 Running 0 2m48s face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
kube-multus-ds-amd64-jltqk 1/1 Running 1 7h46m face::4 controller-1 <none> <none>
kube-multus-ds-amd64-n5hb8 1/1 Running 0 2m52s face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-multus-ds-amd64-p8zsf 1/1 Running 1 7h15m face::3 controller-0 <none> <none>
kube-proxy-8mc4r 1/1 Running 3 14h face::3 controller-0 <none> <none>
kube-proxy-b72qz 1/1 Running 2 13h face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
kube-proxy-g8k8n 1/1 Running 2 13h face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-proxy-gbvsx 1/1 Running 2 13h face::4 controller-1 <none> <none>
kube-proxy-l5qxx 1/1 Running 1 13h face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-scheduler-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-scheduler-controller-1 1/1 Running 2 13h face::4 controller-1 <none> <none>
kube-sriov-cni-ds-amd64-9hbm8 1/1 Running 0 2m33s face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-sriov-cni-ds-amd64-9q8zk 1/1 Running 0 2m33s face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-sriov-cni-ds-amd64-d8fhg 1/1 Running 1 7h15m face::3 controller-0 <none> <none>
kube-sriov-cni-ds-amd64-dkm9z 1/1 Running 1 7h46m face::4 controller-1 <none> <none>
kube-sriov-cni-ds-amd64-hn5qc 1/1 Running 0 2m33s face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
rbd-provisioner-65db585fd6-6w5r4 1/1 Running 4 7h53m dead:beef::a4ce:fec1:5423:e333 controller-1 <none> <none>
rbd-provisioner-65db585fd6-7kz9p 1/1 Running 3 7h22m dead:beef::8e22:765f:6121:eb4b controller-0 <none> <none>
tiller-deploy-7855f54f57-lrc6b 1/1 Running 1 7h22m face::4 controller-1 <none> <none>
kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-767467f9cf-jrx52 1/1 Running 1 32m dead:beef::8e22:765f:6121:eb4f controller-0 <none> <none>
calico-node-8f4l2 0/1 Running 12 13h face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
calico-node-9zwtk 1/1 Running 3 14h face::3 controller-0 <none> <none>
calico-node-d4tff 0/1 CrashLoopBackOff 12 13h face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
calico-node-hmrgr 1/1 Running 2 14h face::4 controller-1 <none> <none>
calico-node-lrbxb 0/1 CrashLoopBackOff 13 13h face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
ceph-pools-audit-1568754600-n8dtm 0/1 Completed 0 12m dead:beef::8e22:765f:6121:eb56 controller-0 <none> <none>
ceph-pools-audit-1568754900-dbdrq 0/1 Completed 0 7m19s dead:beef::a4ce:fec1:5423:e339 controller-1 <none> <none>
ceph-pools-audit-1568755200-pgbtq 0/1 Completed 0 2m19s dead:beef::a4ce:fec1:5423:e33a controller-1 <none> <none>
coredns-7cf476b5c8-4vkzr 1/1 Running 3 7h49m dead:beef::8e22:765f:6121:eb4c controller-0 <none> <none>
coredns-7cf476b5c8-qcxph 1/1 Running 5 8h dead:beef::a4ce:fec1:5423:e332 controller-1 <none> <none>
kube-apiserver-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-apiserver-controller-1 1/1 Running 4 14h face::4 controller-1 <none> <none>
kube-controller-manager-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-controller-manager-controller-1 1/1 Running 2 14h face::4 controller-1 <none> <none>
kube-multus-ds-amd64-5l4dj 1/1 Running 0 30m face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-multus-ds-amd64-9n2cj 1/1 Running 0 30m face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
kube-multus-ds-amd64-jltqk 1/1 Running 1 8h face::4 controller-1 <none> <none>
kube-multus-ds-amd64-n5hb8 1/1 Running 0 30m face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-multus-ds-amd64-p8zsf 1/1 Running 1 7h42m face::3 controller-0 <none> <none>
kube-proxy-8mc4r 1/1 Running 3 14h face::3 controller-0 <none> <none>
kube-proxy-b72qz 1/1 Running 2 13h face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
kube-proxy-g8k8n 1/1 Running 2 13h face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-proxy-gbvsx 1/1 Running 2 14h face::4 controller-1 <none> <none>
kube-proxy-l5qxx 1/1 Running 1 13h face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-scheduler-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-scheduler-controller-1 1/1 Running 2 14h face::4 controller-1 <none> <none>
kube-sriov-cni-ds-amd64-9hbm8 1/1 Running 0 30m face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-sriov-cni-ds-amd64-9q8zk 1/1 Running 0 30m face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-sriov-cni-ds-amd64-d8fhg 1/1 Running 1 7h42m face::3 controller-0 <none> <none>
kube-sriov-cni-ds-amd64-dkm9z 1/1 Running 1 8h face::4 controller-1 <none> <none>
kube-sriov-cni-ds-amd64-hn5qc 1/1 Running 0 30m face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
rbd-provisioner-65db585fd6-6w5r4 1/1 Running 4 8h dead:beef::a4ce:fec1:5423:e333 controller-1 <none> <none>
rbd-provisioner-65db585fd6-7kz9p 1/1 Running 3 7h49m dead:beef::8e22:765f:6121:eb4b controller-0 <none> <none>
tiller-deploy-7855f54f57-lrc6b 1/1 Running 1 7h49m face::4 controller-1 <none> <none>
[sysadmin@controller-0 ~(keystone_admin)]$
Severity
--------
Major
Steps to Reproduce
------------------
1. Verify the health of the system; check for any alarms.
2. Power off all nodes for 60 seconds.
3. Power on all nodes.
4. Verify the nodes come up and are available.
5. Verify the pods are up. As per the description, the calico-node pods go into CrashLoopBackOff.
System Configuration
--------------------
Regular system with IPv6 configuration WCP71-75
Expected Behavior
------------------
No crash on calico-node pods
Actual Behavior
----------------
As per the description, the calico-node pods crash.
Reproducibility
---------------
100% reproducible. Seen in 2 different labs.
Load
----
Build was on 2019-09-16_14-18-20
Last Pass
---------
Timestamp/Logs
--------------
2019-09-17T21:02:06
Test Activity
-------------
Regression test |
Brief Description
-----------------
After a DOR (Dead Office Recovery) test, calico-node pods go into CrashLoopBackOff as shown below.
A lighttpd alarm also appeared, as in https://bugs.launchpad.net/starlingx/+bug/1844456 .
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | degraded |
| 2 | compute-0 | worker | unlocked | enabled | available |
| 3 | compute-1 | worker | unlocked | enabled | available |
| 4 | compute-2 | worker | unlocked | enabled | available |
| 5 | controller-1 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
[sysadmin@controller-0 log(keystone_admin)]$
controller-0:/var/log# cat lighttpd-error.log
2019-09-17 06:27:49: (server.c.1472) server started (lighttpd/1.4.52)
2019-09-17 06:40:13: (server.c.2067) server stopped by UID = 0 PID = 1
fm alarm-list
+----------+----------------------------------------------------------------------------+------------------------------+----------+---------------------+
| Alarm ID | Reason Text | Entity ID | Severity | Time Stamp |
+----------+----------------------------------------------------------------------------+------------------------------+----------+---------------------+
| 400.001 | Service group web-services degraded; lighttpd(enabled-active, failed) | service_domain=controller. | major | 2019-09-17T20:53:19 |
| | | service_group=web-services. | | .183650 |
| | | host=controller-0 | | |
| | | | | |
| 100.114 | NTP address 2607:5300:60:92e7::1 is not a valid or a reachable NTP server. | host=controller-0.ntp=2607: | minor | 2019-09-17T13:56:32 |
| | | 5300:60:92e7::1 | | .157376 |
| | | | | |
| 100.114 | NTP address 2607:5300:60:3308::1 is not a valid or a reachable NTP server. | host=controller-1.ntp=2607: | minor | 2019-09-17T13:35:00 |
| | | 5300:60:3308::1 | | .785675 |
| | | | | |
+----------+----------------------------------------------------------------------------+------------------------------+----------+---------------------+
kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-767467f9cf-jrx52 1/1 Running 1 4m24s dead:beef::8e22:765f:6121:eb4f controller-0 <none> <none>
calico-node-8f4l2 0/1 Running 5 13h face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
calico-node-9zwtk 1/1 Running 3 14h face::3 controller-0 <none> <none>
calico-node-d4tff 0/1 Running 4 13h face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
calico-node-hmrgr 1/1 Running 2 13h face::4 controller-1 <none> <none>
calico-node-lrbxb 0/1 Running 5 13h face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
ceph-pools-audit-1568752200-g68wf 0/1 Completed 0 24m dead:beef::a4ce:fec1:5423:e331 controller-1 <none> <none>
ceph-pools-audit-1568752500-xwrvw 0/1 Completed 0 19m dead:beef::8e22:765f:6121:eb4a controller-0 <none> <none>
ceph-pools-audit-1568753400-zw4bl 0/1 Completed 0 4m25s dead:beef::8e22:765f:6121:eb4e controller-0 <none> <none>
coredns-7cf476b5c8-4vkzr 1/1 Running 3 7h22m dead:beef::8e22:765f:6121:eb4c controller-0 <none> <none>
coredns-7cf476b5c8-qcxph 1/1 Running 5 7h53m dead:beef::a4ce:fec1:5423:e332 controller-1 <none> <none>
kube-apiserver-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-apiserver-controller-1 1/1 Running 4 13h face::4 controller-1 <none> <none>
kube-controller-manager-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-controller-manager-controller-1 1/1 Running 2 13h face::4 controller-1 <none> <none>
kube-multus-ds-amd64-5l4dj 1/1 Running 0 3m3s face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-multus-ds-amd64-9n2cj 1/1 Running 0 2m48s face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
kube-multus-ds-amd64-jltqk 1/1 Running 1 7h46m face::4 controller-1 <none> <none>
kube-multus-ds-amd64-n5hb8 1/1 Running 0 2m52s face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-multus-ds-amd64-p8zsf 1/1 Running 1 7h15m face::3 controller-0 <none> <none>
kube-proxy-8mc4r 1/1 Running 3 14h face::3 controller-0 <none> <none>
kube-proxy-b72qz 1/1 Running 2 13h face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
kube-proxy-g8k8n 1/1 Running 2 13h face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-proxy-gbvsx 1/1 Running 2 13h face::4 controller-1 <none> <none>
kube-proxy-l5qxx 1/1 Running 1 13h face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-scheduler-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-scheduler-controller-1 1/1 Running 2 13h face::4 controller-1 <none> <none>
kube-sriov-cni-ds-amd64-9hbm8 1/1 Running 0 2m33s face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-sriov-cni-ds-amd64-9q8zk 1/1 Running 0 2m33s face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-sriov-cni-ds-amd64-d8fhg 1/1 Running 1 7h15m face::3 controller-0 <none> <none>
kube-sriov-cni-ds-amd64-dkm9z 1/1 Running 1 7h46m face::4 controller-1 <none> <none>
kube-sriov-cni-ds-amd64-hn5qc 1/1 Running 0 2m33s face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
rbd-provisioner-65db585fd6-6w5r4 1/1 Running 4 7h53m dead:beef::a4ce:fec1:5423:e333 controller-1 <none> <none>
rbd-provisioner-65db585fd6-7kz9p 1/1 Running 3 7h22m dead:beef::8e22:765f:6121:eb4b controller-0 <none> <none>
tiller-deploy-7855f54f57-lrc6b 1/1 Running 1 7h22m face::4 controller-1 <none> <none>
kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-767467f9cf-jrx52 1/1 Running 1 32m dead:beef::8e22:765f:6121:eb4f controller-0 <none> <none>
calico-node-8f4l2 0/1 Running 12 13h face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
calico-node-9zwtk 1/1 Running 3 14h face::3 controller-0 <none> <none>
calico-node-d4tff 0/1 CrashLoopBackOff 12 13h face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
calico-node-hmrgr 1/1 Running 2 14h face::4 controller-1 <none> <none>
calico-node-lrbxb 0/1 CrashLoopBackOff 13 13h face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
ceph-pools-audit-1568754600-n8dtm 0/1 Completed 0 12m dead:beef::8e22:765f:6121:eb56 controller-0 <none> <none>
ceph-pools-audit-1568754900-dbdrq 0/1 Completed 0 7m19s dead:beef::a4ce:fec1:5423:e339 controller-1 <none> <none>
ceph-pools-audit-1568755200-pgbtq 0/1 Completed 0 2m19s dead:beef::a4ce:fec1:5423:e33a controller-1 <none> <none>
coredns-7cf476b5c8-4vkzr 1/1 Running 3 7h49m dead:beef::8e22:765f:6121:eb4c controller-0 <none> <none>
coredns-7cf476b5c8-qcxph 1/1 Running 5 8h dead:beef::a4ce:fec1:5423:e332 controller-1 <none> <none>
kube-apiserver-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-apiserver-controller-1 1/1 Running 4 14h face::4 controller-1 <none> <none>
kube-controller-manager-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-controller-manager-controller-1 1/1 Running 2 14h face::4 controller-1 <none> <none>
kube-multus-ds-amd64-5l4dj 1/1 Running 0 30m face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-multus-ds-amd64-9n2cj 1/1 Running 0 30m face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
kube-multus-ds-amd64-jltqk 1/1 Running 1 8h face::4 controller-1 <none> <none>
kube-multus-ds-amd64-n5hb8 1/1 Running 0 30m face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-multus-ds-amd64-p8zsf 1/1 Running 1 7h42m face::3 controller-0 <none> <none>
kube-proxy-8mc4r 1/1 Running 3 14h face::3 controller-0 <none> <none>
kube-proxy-b72qz 1/1 Running 2 13h face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
kube-proxy-g8k8n 1/1 Running 2 13h face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-proxy-gbvsx 1/1 Running 2 14h face::4 controller-1 <none> <none>
kube-proxy-l5qxx 1/1 Running 1 13h face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-scheduler-controller-0 1/1 Running 6 14h face::3 controller-0 <none> <none>
kube-scheduler-controller-1 1/1 Running 2 14h face::4 controller-1 <none> <none>
kube-sriov-cni-ds-amd64-9hbm8 1/1 Running 0 30m face::fccf:ae0b:600a:f8a4 compute-2 <none> <none>
kube-sriov-cni-ds-amd64-9q8zk 1/1 Running 0 30m face::29dd:f3c5:7eb9:1d21 compute-1 <none> <none>
kube-sriov-cni-ds-amd64-d8fhg 1/1 Running 1 7h42m face::3 controller-0 <none> <none>
kube-sriov-cni-ds-amd64-dkm9z 1/1 Running 1 8h face::4 controller-1 <none> <none>
kube-sriov-cni-ds-amd64-hn5qc 1/1 Running 0 30m face::d299:e6a8:7070:7bb0 compute-0 <none> <none>
rbd-provisioner-65db585fd6-6w5r4 1/1 Running 4 8h dead:beef::a4ce:fec1:5423:e333 controller-1 <none> <none>
rbd-provisioner-65db585fd6-7kz9p 1/1 Running 3 7h49m dead:beef::8e22:765f:6121:eb4b controller-0 <none> <none>
tiller-deploy-7855f54f57-lrc6b 1/1 Running 1 7h49m face::4 controller-1 <none> <none>
[sysadmin@controller-0 ~(keystone_admin)]$
Severity
--------
Major
Steps to Reproduce
------------------
1. Verify the health of the system; check for any alarms.
2. Power off all nodes for 60 seconds.
3. Power on all nodes.
4. Verify the nodes come up and are available.
5. Verify the pods are up. As per the description, the calico-node pods go into CrashLoopBackOff.
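For step 5, the failing pods can be picked out of the `kubectl get pod` listing with a small filter. This is a minimal sketch; the sample input is a two-line excerpt of the output captured in this report, and on a live system you would pipe `kubectl get pod -n kube-system --no-headers` into the same awk filter.

```shell
# Minimal sketch: isolate pods stuck in CrashLoopBackOff from a
# `kubectl get pod -n kube-system` listing. Sample input below is an
# excerpt of the captured output; field 3 is the STATUS column.
printf '%s\n' \
  'calico-node-d4tff    0/1  CrashLoopBackOff  12  13h' \
  'calico-node-hmrgr    1/1  Running            2  14h' \
  | awk '$3 == "CrashLoopBackOff" {print $1}'
# → calico-node-d4tff
```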
System Configuration
--------------------
Regular system with IPv6 configuration
Expected Behavior
------------------
No crash on calico-node pods
Actual Behavior
----------------
As per the description, the calico-node pods crash.
Reproducibility
---------------
100% reproducible. Seen in 2 different IPv6 labs: WCP71-75 & WCP63-66.
Not yet tested with IPv4.
Load
----
Build was on 2019-09-16_14-18-20
Last Pass
---------
Timestamp/Logs
--------------
2019-09-17T21:02:06
Test Activity
-------------
Regression test |
|
2019-09-20 19:08:43 |
Ghada Khalil |
tags |
|
stx.3.0 stx.containers stx.networking |
|
2019-09-20 19:09:07 |
Ghada Khalil |
bug |
|
|
added subscriber Bill Zvonar |
2019-09-20 19:09:18 |
Ghada Khalil |
starlingx: importance |
Undecided |
High |
|
2019-09-20 19:09:31 |
Ghada Khalil |
starlingx: assignee |
|
Joseph Richard (josephrichard) |
|
2019-09-20 19:09:33 |
Ghada Khalil |
starlingx: status |
New |
Triaged |
|
2019-10-02 17:42:28 |
Yang Liu |
tags |
stx.3.0 stx.containers stx.networking |
stx.3.0 stx.containers stx.networking stx.retestneeded |
|
2019-10-08 15:04:10 |
Ghada Khalil |
starlingx: status |
Triaged |
Incomplete |
|
2019-10-08 19:42:54 |
Anujeyan Manokeran |
attachment added |
|
Logs are the same as in the other Launchpad bug 1844456; attached: https://bugs.launchpad.net/starlingx/+bug/1844579/+attachment/5295555/+files/ALL_NODES_20190917.201851.tar |
|
2019-10-08 19:44:30 |
Anujeyan Manokeran |
attachment removed |
Logs are the same as in the other Launchpad bug 1844456; attached: https://bugs.launchpad.net/starlingx/+bug/1844579/+attachment/5295555/+files/ALL_NODES_20190917.201851.tar |
|
|
2019-10-08 19:46:31 |
Anujeyan Manokeran |
attachment added |
|
Collect logs: https://bugs.launchpad.net/starlingx/+bug/1844579/+attachment/5295556/+files/ALL_NODES_20190917.210342.tar |
|
2019-10-09 15:51:22 |
Joseph Richard |
marked as duplicate |
|
1844192 |
|
2019-10-09 18:08:31 |
Ghada Khalil |
removed duplicate marker |
1844192 |
|
|
2019-10-09 18:08:43 |
Ghada Khalil |
starlingx: status |
Incomplete |
Fix Released |
|
2019-10-09 18:08:54 |
Ghada Khalil |
marked as duplicate |
|
1844192 |
|
2019-10-23 11:09:39 |
Anujeyan Manokeran |
attachment added |
|
Re-test failure logs: https://bugs.launchpad.net/starlingx/+bug/1844579/+attachment/5299457/+files/ALL_NODES_20191022.234936.tar |
|
2019-10-23 14:35:33 |
Yang Liu |
removed duplicate marker |
1844192 |
|
|
2019-10-23 14:35:39 |
Yang Liu |
starlingx: status |
Fix Released |
Confirmed |
|
2019-10-30 22:09:27 |
Bill Zvonar |
removed subscriber Bill Zvonar |
|
|
|
2019-11-01 19:46:09 |
OpenStack Infra |
starlingx: status |
Confirmed |
In Progress |
|
2019-11-12 22:45:16 |
Ghada Khalil |
bug |
|
|
added subscriber Bill Zvonar |
2019-11-14 23:35:11 |
Ghada Khalil |
bug |
|
|
added subscriber Daniel Badea |
2019-11-18 23:08:03 |
Bill Zvonar |
removed subscriber Bill Zvonar |
|
|
|
2019-11-19 21:20:19 |
OpenStack Infra |
starlingx: status |
In Progress |
Fix Released |
|
2019-12-12 16:33:48 |
OpenStack Infra |
tags |
stx.3.0 stx.containers stx.networking stx.retestneeded |
in-f-centos8 stx.3.0 stx.containers stx.networking stx.retestneeded |
|
2019-12-12 16:33:49 |
OpenStack Infra |
cve linked |
|
2018-12327 |
|
2019-12-12 16:33:49 |
OpenStack Infra |
cve linked |
|
2018-15686 |
|
2019-12-12 16:33:49 |
OpenStack Infra |
cve linked |
|
2019-14287 |
|
2019-12-13 14:31:19 |
ayyappa |
tags |
in-f-centos8 stx.3.0 stx.containers stx.networking stx.retestneeded |
in-f-centos8 stx.3.0 stx.containers stx.networking |
|
2019-12-30 16:00:56 |
Ghada Khalil |
cve unlinked |
2018-12327 |
|
|
2019-12-30 16:01:10 |
Ghada Khalil |
cve unlinked |
2018-15686 |
|
|
2019-12-30 16:01:20 |
Ghada Khalil |
cve unlinked |
2019-14287 |
|
|