Murano doesn't delete stack after failed deployment of Kubernetes Cluster
Bug #1461564 reported by Anastasia Kuznetsova
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Mirantis OpenStack | Won't Fix | High | MOS Murano | 6.1
Bug Description
Steps to reproduce:
1. Create Murano environment
2. Add Kubernetes Cluster, Kubernetes Pod and any docker application to the created environment
3. When adding the applications, select a very large flavor (the goal is to make the deployment fail; in my environment I chose a very large flavor and the deployment failed because there were not enough resources)
4. Deploy environment
5. After deployment, select 'Delete environment'
Actual result:
The Murano environment was deleted, but the stack still exists. Moreover, the heat-engine log contains many 'stack update' calls instead of 'stack delete'
Changed in mos:
assignee: nobody → MOS Murano (mos-murano)
Changed in mos:
milestone: none → 6.1
Changed in mos:
importance: Undecided → High
My current theory is that during the (failed) stack creation the k8s node did not get an IP, so its object has no IP address associated with it in the Object Model of the Murano environment.
When we try to delete the environment, an error occurs in the destroy() method of the k8s node, because it requests an IP address that is not there.
Next, due to bug https://bugs.launchpad.net/murano/+bug/1456724, the environment is not marked as failed to delete but is deleted, while the stack is left as is.
The correct way to fix this would be to fix the k8s applications so that they do not rely on IPs always being present.
Fixing https://bugs.launchpad.net/murano/+bug/1456724 would help, but without fixing the k8s apps it would probably create a situation where an environment cannot be deleted, due to the error that happens in the destroy() method.
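As a rough illustration of the proposed fix (this is a hypothetical Python sketch, not the actual MuranoPL code; the class and attribute names are invented), the cleanup path should tolerate a missing IP instead of raising, so environment deletion can proceed even after a failed deployment:

```python
# Hypothetical sketch: a destroy() that tolerates a missing IP in the
# object model instead of raising and aborting environment deletion.

class KubernetesNode:
    def __init__(self, object_model):
        # object_model mimics the node's serialized state in the Murano
        # environment; 'ip' may be absent if stack creation failed before
        # an address was ever allocated.
        self.object_model = object_model

    def destroy(self):
        ip = self.object_model.get('ip')
        if ip is None:
            # Deployment failed before an IP was assigned: skip the
            # IP-dependent cleanup instead of raising, so the rest of
            # the teardown (and the Heat stack delete) can proceed.
            return 'skipped-ip-cleanup'
        # ... release the floating IP, deregister the node, etc. ...
        return 'released {}'.format(ip)


# A node whose deployment failed has no 'ip' key; destroy() must not raise.
failed_node = KubernetesNode({'name': 'kube-1'})
healthy_node = KubernetesNode({'name': 'kube-2', 'ip': '10.0.0.5'})
```

The same guard in the real k8s application classes would let destroy() run to completion, so fixing bug 1456724 (surfacing delete failures) would no longer risk making environments undeletable.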