The application cannot be removed if it is not active.

Bug #2045112 reported by gulsum atici
This bug affects 1 person

Affects:      Canonical Juju
Status:       Confirmed
Importance:   Undecided
Assigned to:  Unassigned

Bug Description

Hello,

We decided to remove and redeploy an application that was stuck in waiting status.
Unfortunately, the charm could not be removed. How can an application that is not in active status be removed?

We ran the following commands in order, after waiting a reasonable period of time.

- remove-application (no effect)
- remove-application --force (no effect)
- remove-application --force --no-wait (removed, but left resources in the Kubernetes namespace)
- Clean up the Kubernetes environment manually (see the sketch after this list)
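
A rough sketch of what the manual cleanup step amounts to (assuming a MicroK8s cluster and the `core` model namespace used below; the exact leftover resources vary from run to run):

```
# List anything in the model namespace that still references the removed application
kubectl get all,pvc,configmap,secret -n core | grep -i upf
```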

How to reproduce the issue?

$ juju status

Model  Controller          Cloud/Region        Version  SLA          Timestamp
core   microk8s-localhost  microk8s/localhost  3.1.6    unsupported  10:44:09+03:00

App         Version  Status   Scale  Charm       Channel  Rev  Address        Exposed  Message
sdcore-upf           waiting  1/0    sdcore-upf           6    10.152.183.75  no       installing agent

Unit           Workload  Agent      Address      Ports  Message
sdcore-upf/0*  waiting   executing  10.1.146.14         (config-changed) Waiting for bessd service to run

$ juju remove-application sdcore-upf

will remove application sdcore-upf
- will remove unit sdcore-upf/0
- will detach storage config/33
- will detach storage shared-app/34

$ juju status

Model  Controller          Cloud/Region        Version  SLA          Timestamp
core   microk8s-localhost  microk8s/localhost  3.1.6    unsupported  10:44:09+03:00

App         Version  Status   Scale  Charm       Channel  Rev  Address        Exposed  Message
sdcore-upf           waiting  1/0    sdcore-upf           6    10.152.183.75  no       installing agent

Unit           Workload  Agent      Address      Ports  Message
sdcore-upf/0*  waiting   executing  10.1.146.14         (config-changed) Waiting for bessd service to run

$ juju remove-application sdcore-upf --force

will remove application sdcore-upf
- will remove unit sdcore-upf/0
- will detach storage config/33
- will detach storage shared-app/34

$ juju status

Model  Controller          Cloud/Region        Version  SLA          Timestamp
core   microk8s-localhost  microk8s/localhost  3.1.6    unsupported  10:44:09+03:00

App         Version  Status   Scale  Charm       Channel  Rev  Address        Exposed  Message
sdcore-upf           waiting  1/0    sdcore-upf           6    10.152.183.75  no       installing agent

Unit           Workload  Agent      Address      Ports  Message
sdcore-upf/0*  waiting   executing  10.1.146.14         (config-changed) Waiting for bessd service to run

$ juju remove-application sdcore-upf --force --no-wait

$ kubectl get all -A | grep -i upf

core service/sdcore-upf-external LoadBalancer 10.152.183.205 10.0.0.4 8805:31218/UDP 12h
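
A minimal sketch of the corresponding manual cleanup (the service name and namespace are taken from the output above; whether anything else was left over is not shown here):

```
# Delete the LoadBalancer service left behind after --force --no-wait
kubectl delete service sdcore-upf-external -n core
```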

Juju debug log during the removal process:

unit-sdcore-upf-0: 10:56:11 INFO unit.sdcore-upf/0.juju-log Failed running configuration for bess
unit-nrf-0: 10:56:11 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)
unit-mongodb-k8s-0: 10:56:11 ERROR unit.mongodb-k8s/0.juju-log Failed to get pbm status.
unit-mongodb-k8s-0: 10:56:11 WARNING unit.mongodb-k8s/0.juju-log No relation: certificates
unit-mongodb-k8s-0: 10:56:12 INFO juju.worker.uniter.operation ran "update-status" hook (via hook dispatching script: dispatch)
unit-sdcore-upf-0: 10:56:13 INFO unit.sdcore-upf/0.juju-log Starting configuration of the `bessd` service
unit-sdcore-upf-0: 10:56:14 ERROR unit.sdcore-upf/0.juju-log *** Error: BESS daemon not connected

unit-sdcore-upf-0: 10:56:14 ERROR unit.sdcore-upf/0.juju-log Command failed: run /opt/bess/bessctl/conf/up4

unit-sdcore-upf-0: 10:56:14 INFO unit.sdcore-upf/0.juju-log Failed running configuration for bess
unit-sdcore-upf-0: 10:56:16 INFO unit.sdcore-upf/0.juju-log Starting configuration of the `bessd` service
unit-sdcore-upf-0: 10:56:17 ERROR unit.sdcore-upf/0.juju-log *** Error: BESS daemon not connected

unit-sdcore-upf-0: 10:56:17 ERROR unit.sdcore-upf/0.juju-log Command failed: run /opt/bess/bessctl/conf/up4

unit-sdcore-upf-0: 10:56:17 INFO unit.sdcore-upf/0.juju-log Failed running configuration for bess
unit-sdcore-upf-0: 10:56:19 INFO unit.sdcore-upf/0.juju-log Starting configuration of the `bessd` service
unit-sdcore-upf-0: 10:56:20 ERROR unit.sdcore-upf/0.juju-log *** Error: BESS daemon not connected

unit-sdcore-upf-0: 10:56:20 ERROR unit.sdcore-upf/0.juju-log Command failed: run /opt/bess/bessctl/conf/up4

unit-sdcore-upf-0: 10:56:20 INFO unit.sdcore-upf/0.juju-log Failed running configuration for bess
unit-sdcore-upf-0: 10:56:22 INFO unit.sdcore-upf/0.juju-log Starting configuration of the `bessd` service
unit-sdcore-upf-0: 10:56:23 ERROR unit.sdcore-upf/0.juju-log *** Error: BESS daemon not connected

unit-sdcore-upf-0: 10:56:23 ERROR unit.sdcore-upf/0.juju-log Command failed: run /opt/bess/bessctl/conf/up4

unit-sdcore-upf-0: 10:56:23 INFO unit.sdcore-upf/0.juju-log Failed running configuration for bess
unit-sdcore-upf-0: 10:56:25 INFO unit.sdcore-upf/0.juju-log Starting configuration of the `bessd` service
unit-sdcore-upf-0: 10:56:26 ERROR unit.sdcore-upf/0.juju-log *** Error: BESS daemon not connected

unit-sdcore-upf-0: 10:56:26 ERROR unit.sdcore-upf/0.juju-log Command failed: run /opt/bess/bessctl/conf/up4

unit-sdcore-upf-0: 10:56:26 INFO unit.sdcore-upf/0.juju-log Failed running configuration for bess
unit-sdcore-upf-0: 10:56:28 INFO unit.sdcore-upf/0.juju-log Starting configuration of the `bessd` service
unit-sdcore-upf-0: 10:56:29 ERROR unit.sdcore-upf/0.juju-log *** Error: BESS daemon not connected

unit-sdcore-upf-0: 10:56:29 ERROR unit.sdcore-upf/0.juju-log Command failed: run /opt/bess/bessctl/conf/up4

unit-sdcore-upf-0: 10:56:29 INFO unit.sdcore-upf/0.juju-log Failed running configuration for bess
controller-0: 10:56:29 INFO juju.worker.caasapplicationprovisioner.sdcore-upf scaling application "sdcore-upf" to desired scale 0
controller-0: 10:56:29 INFO juju.worker.caasapplicationprovisioner.sdcore-upf scaling application "sdcore-upf" to desired scale 0
controller-0: 10:56:33 INFO juju.worker.caasapplicationprovisioner.sdcore-upf scaling application "sdcore-upf" to desired scale 0
controller-0: 10:56:33 INFO juju.worker.caasapplicationprovisioner.sdcore-upf removing dead unit sdcore-upf/0
controller-0: 10:56:34 INFO juju.worker.caasapplicationprovisioner.sdcore-upf scaling application "sdcore-upf" to desired scale 0
controller-0: 10:56:38 INFO juju.worker.caasapplicationprovisioner.runner stopped "sdcore-upf", err: <nil>

Revision history for this message
Ian Booth (wallyworld) wrote:

This might be fixed by this PR:
https://github.com/juju/juju/pull/16652

Revision history for this message
Harry Pidcock (hpidcock) wrote:

Are we able to get the logs from the controller model, please?

Harry Pidcock (hpidcock)
Changed in juju:
status: New → Incomplete
Revision history for this message
gulsum atici (gatici) wrote (last edit):

The controller logs are provided at the link below:

https://pastebin.ubuntu.com/p/K8J5mhWwQv/

There is another finding: the charm could not be removed even though the application is in active status, while its statefulset is not deployed for some reason.

$ juju status
Model  Controller            Cloud/Region          Version  SLA          Timestamp
core   myk8scloud-localhost  myk8scloud/localhost  3.1.6    unsupported  09:58:52+03:00

App         Version  Status  Scale  Charm       Channel  Rev  Address         Exposed  Message
sdcore-upf           active  1/0    sdcore-upf           0    10.152.183.236  no

Unit           Workload  Agent      Address      Ports  Message
sdcore-upf/0*  active    executing  10.1.146.48

$ kubectl get all -n core | grep -i upf
pod/sdcore-upf-0 0/3 Running 151 (84s ago) 4h26m
service/sdcore-upf-endpoints ClusterIP None <none> <none> 17h
service/sdcore-upf-external LoadBalancer 10.152.183.112 10.0.0.3 8805:31247/UDP 17h
service/sdcore-upf ClusterIP 10.152.183.236 <none> 65535/TCP,8080/TCP 17h
statefulset.apps/sdcore-upf 0/1 17h
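
To see why the statefulset stays at 0/1, describing it and checking recent namespace events is one possible next step; a minimal sketch, assuming the resource names shown above:

```
# Inspect the statefulset that never reached its desired replica count
kubectl describe statefulset sdcore-upf -n core

# Recent events often show scheduling, volume, or image-pull problems
kubectl get events -n core --sort-by=.metadata.creationTimestamp | tail -n 20
```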

Revision history for this message
Launchpad Janitor (janitor) wrote:

[Expired for Canonical Juju because there has been no activity for 60 days.]

Changed in juju:
status: Incomplete → Expired
gulsum atici (gatici)
Changed in juju:
status: Expired → Incomplete
status: Incomplete → In Progress
status: In Progress → Confirmed
Revision history for this message
gulsum atici (gatici) wrote:

This bug was tested with Juju 3.4.3 and it still exists.

If the application is stuck in a status such as unknown, it cannot be removed.

```
 juju status
Model    Controller          Cloud/Region        Version  SLA          Timestamp
testupf  k8scloud-localhost  k8scloud/localhost  3.4.3    unsupported  14:13:06+03:00

App  Version  Status   Scale  Charm           Channel   Rev  Address         Exposed  Message
upf           waiting  0      sdcore-upf-k8s  1.5/edge  346  10.152.183.156  no       waiting for units to settle down

Unit   Workload  Agent  Address  Ports  Message
upf/0  unknown   lost                   agent lost, see 'juju show-status-log upf/0'

```
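
As the status output suggests, the lost unit's history can be inspected before forcing removal; a minimal sketch using standard Juju commands:

```
# Status history for the lost unit, as suggested in the status message
juju show-status-log upf/0

# Replay the model's debug log for that unit without tailing
juju debug-log --replay --no-tail --include unit-upf-0
```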

When we force its removal, it is removed but leaves some resources behind in the namespace (see the cleanup sketch after the listing below).

```
 juju remove-application upf --force
WARNING This command will perform the following actions:
will remove application upf
- will remove unit upf/0
- will detach storage config/0
- will detach storage shared-app/1

Continue [y/N]? y
gatici@gaticipc:~$ juju status
Model    Controller          Cloud/Region        Version  SLA          Timestamp
testupf  k8scloud-localhost  k8scloud/localhost  3.4.3    unsupported  14:16:00+03:00

App  Version  Status   Scale  Charm           Channel   Rev  Address         Exposed  Message
upf           waiting  0      sdcore-upf-k8s  1.5/edge  346  10.152.183.156  no       waiting for units to settle down

Unit   Workload  Agent  Address  Ports  Message
upf/0  unknown   lost                   agent lost, see 'juju show-status-log upf/0'

juju status
Model    Controller          Cloud/Region        Version  SLA          Timestamp
testupf  k8scloud-localhost  k8scloud/localhost  3.4.3    unsupported  14:18:05+03:00

Model "admin/testupf" is empty.
gatici@gaticipc:~$ kubectl get net-attach-def -n testupf
NAME         AGE
access-net   71m
core-net     30m
```
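
A minimal sketch of cleaning up the leftover NetworkAttachmentDefinitions manually (names and namespace taken from the output above, assuming nothing else in the cluster still uses them):

```
# Delete the network-attachment-definitions left behind after the forced removal
kubectl delete net-attach-def access-net core-net -n testupf
```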
