[k8s-R5.0]: Improper cleanup of k8s objects

Bug #1764490 reported by Pulkit Tandon on 2018-04-16
Affects            Status    Importance  Assigned to          Milestone
Juniper Openstack  (status tracked in Trunk)
  R5.0             Invalid   High        Prasanna Mucharikar
  Trunk            Invalid   High        Prasanna Mucharikar

Bug Description

Bug Template:

Configuration:
K8s 1.9.2
coat-5.0-15
Centos-7.4

Setup:
5-node setup:
1 Kube master, 3 controllers,
2 agent + k8s slave nodes.

The issue was observed in a k8s sanity run:
LogsLocation : http://10.204.216.50/Docs/logs/5.0-15_2018_04_16_17_17_20_1523889114.59/logs/
Report : http://10.204.216.50/Docs/logs/5.0-15_2018_04_16_17_17_20_1523889114.59/junit-noframes.html

Description:
By default, cluster_project is not set.
This results in a project being created per namespace.
The VMI/port created for a k8s service does not get deleted from the project.
Thus, when the namespace is deleted in k8s, Contrail does not delete the corresponding project, because a VMI reference still exists.
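For reference, per-namespace projects can be avoided by pointing all namespaces at one shared project in contrail-kube-manager. A sketch only; the file path, section name, and option syntax are assumptions and may differ by release:

```ini
# /etc/contrail/contrail-kubernetes.conf  (path and section assumed; verify per release)
[KUBERNETES]
# Map all k8s namespaces to a single shared Contrail project
# instead of creating one project per namespace.
cluster_project = {'domain': 'default-domain', 'project': 'k8s-default'}
```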

Some stale pod VMIs were also found in a few projects.
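The cleanup failure can be illustrated with a toy model (not Contrail code; all names are illustrative): an API server refuses to delete a parent object while a child still references it, so a stale service VMI keeps its per-namespace project alive after the namespace is gone.

```python
# Toy model of reference-blocked deletion. A Project is only deletable
# once no VMI (port) references it, mirroring the behavior described
# in this bug: stale service VMIs keep the project from being removed.

class RefExistsError(Exception):
    """Raised when deleting an object that is still referenced."""

class Api:
    def __init__(self):
        self.projects = {}          # project name -> set of VMI names

    def create_project(self, name):
        self.projects[name] = set()

    def create_vmi(self, project, vmi):
        self.projects[project].add(vmi)

    def delete_vmi(self, project, vmi):
        self.projects[project].discard(vmi)

    def delete_project(self, name):
        if self.projects[name]:
            # Stale pod/service VMIs left behind block the delete.
            raise RefExistsError(
                f"project {name} still referenced by "
                f"{sorted(self.projects[name])}")
        del self.projects[name]

api = Api()
api.create_project("k8s-ns1")            # one project per namespace
api.create_vmi("k8s-ns1", "svc-vmi-1")   # service port in that project

try:
    api.delete_project("k8s-ns1")        # namespace deleted, VMI not cleaned up
    blocked = False
except RefExistsError:
    blocked = True
print(blocked)                           # → True

api.delete_vmi("k8s-ns1", "svc-vmi-1")   # clean up the VMI first
api.delete_project("k8s-ns1")            # now the project can go
print("k8s-ns1" in api.projects)         # → False
```

The fix implied by the report is the reverse order: delete the service VMIs when the namespace is torn down, so the project delete no longer hits a dangling reference.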

Pulkit Tandon (pulkitt) wrote:

Reducing the importance from "critical" to "high" as the issue is sporadic.

Prasanna Mucharikar (mprasanna) wrote:

Not seen with build 16. Waiting for confirmation on later builds.
