Charm stuck in error state after upgrading juju from 2.9.34 to 2.9.37
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | New | Undecided | Unassigned |
Bug Description
Hello,
Juju 2.9.37 is installed in a VM, and the charms were running in the osm model, which was at Juju version 2.9.34.
installed: 2.9.37 (21315) 96MB classic
ubuntu@
Model Controller Cloud/Region Version SLA Timestamp
osm osm-vca microk8s/localhost 2.9.34 unsupported 17:13:21Z
App Version Status Scale Charm Channel Rev Address Exposed Message
grafana res:image@9b5a5a8 active 1 osm-grafana 12.0/stable 100 10.152.183.234 no ready
kafka active 1 kafka-k8s latest/stable 5 10.152.183.30 no
keystone active 1 osm-keystone latest/stable 5 10.152.183.188 no
lcm opensourcemano/
mariadb mariadb/server:10.3 active 1 charmed-
mon opensourcemano/
mongodb library/
nbi opensourcemano/
ng-ui opensourcemano/
pla opensourcemano/
pol opensourcemano/
prometheus res:backup-
ro opensourcemano/
zookeeper active 1 zookeeper-k8s latest/stable 10 10.152.183.34 no
Unit Workload Agent Address Ports Message
grafana/0* active idle 10.1.47.108 3000/TCP ready
kafka/0* active idle 10.1.47.104
keystone/0* active idle 10.1.47.124
lcm/0* active idle 10.1.47.100 9999/TCP ready
mariadb/0* active idle 10.1.47.106 3306/TCP ready
mon/1* active idle 10.1.47.93 8000/TCP ready
mongodb/0* active idle 10.1.47.101 27017/TCP
nbi/0* active idle 10.1.47.121 9999/TCP ready
ng-ui/0* active idle 10.1.47.120 80/TCP ready
pla/0* active idle 10.1.47.103 9999/TCP ready
pol/0* active idle 10.1.47.126 9999/TCP ready
prometheus/0* active idle 10.1.47.112 9090/TCP ready
ro/0* active idle 10.1.47.91 9090/TCP ready
zookeeper/0* active idle 10.1.47.119
The controller was upgraded by running:
juju upgrade-controller --agent-version 2.9.37
Then the model was upgraded with:
juju upgrade-model --agent-version 2.9.37
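For reference, the controller and model agent versions can be confirmed after each step with something like the following (a minimal sketch; the grep relies on the agent-version field present in the YAML output of these commands):

juju show-controller osm-vca | grep agent-version
juju show-model osm | grep agent-version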
After the upgrade, some of the units are stuck in error state with the following logs:
https:/
ubuntu@
Model Controller Cloud/Region Version SLA Timestamp
osm osm-vca microk8s/localhost 2.9.37 unsupported 17:27:02Z
App Version Status Scale Charm Channel Rev Address Exposed Message
grafana res:image@9b5a5a8 active 1 osm-grafana 12.0/stable 100 10.152.183.234 no ready
kafka waiting 0/1 kafka-k8s latest/stable 5 10.152.183.30 no installing agent
keystone waiting 0/1 osm-keystone latest/stable 5 10.152.183.188 no installing agent
lcm opensourcemano/
mariadb mariadb/server:10.3 active 1 charmed-
mon opensourcemano/
mongodb library/
nbi opensourcemano/
ng-ui opensourcemano/
pla opensourcemano/
pol opensourcemano/
prometheus res:backup-
ro opensourcemano/
zookeeper waiting 0/1 zookeeper-k8s latest/stable 10 10.152.183.34 no installing agent
Unit Workload Agent Address Ports Message
grafana/1* active idle 10.1.47.86 3000/TCP ready
kafka/0 error lost 10.1.47.83 crash loop backoff: back-off 5m0s restarting failed container=
keystone/0 error lost 10.1.47.122 crash loop backoff: back-off 5m0s restarting failed container=
lcm/1* error idle 10.1.47.96 9999/TCP crash loop backoff: back-off 5m0s restarting failed container=lcm pod=lcm-
mariadb/0* active idle 10.1.47.84 3306/TCP ready
mon/2* active idle 10.1.47.116 8000/TCP ready
mongodb/0* active idle 10.1.47.82 27017/TCP
nbi/0* active idle 10.1.47.121 9999/TCP ready
nbi/1 waiting idle 10.1.47.92 9999/TCP waiting for container
ng-ui/1* active idle 10.1.47.90 80/TCP ready
pla/1* active idle 10.1.47.125 9999/TCP ready
pol/1* error idle 10.1.47.77 9999/TCP crash loop backoff: back-off 5m0s restarting failed container=pol pod=pol-
prometheus/0* active idle 10.1.47.123 9090/TCP ready
ro/1* active idle 10.1.47.78 9090/TCP ready
zookeeper/0 error lost 10.1.47.95 container error:
Could you help with this problem?
How to reproduce the problem:
1. Install OSM:
wget https:/
chmod +x install_osm.sh
./install_osm.sh --charmed
2. Upgrade controller
juju upgrade-controller --agent-version 2.9.37
3. Upgrade model
juju upgrade-model --agent-version 2.9.37
4. Check the status of the applications (a fuller set of checks is sketched after this list)
juju status, juju debug-log
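A minimal sketch of the step 4 checks, assuming a microk8s-backed controller and the osm model (the unit and pod names below are only examples):

# overall model and unit state
juju status -m osm
# Juju agent logs for the failing units
juju debug-log -m osm --include kafka/0 --include keystone/0
# Kubernetes-level view of the crash-looping pods
microk8s kubectl -n osm get pods
microk8s kubectl -n osm describe pod kafka-0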
Many Thanks!
Gulsum
I think this is a dupe of bug 1997253
Can you please try with 2.9.38?
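A minimal sketch of trying 2.9.38, assuming the client was installed from the snap and the model is osm (the channel name is an assumption):

# refresh the Juju client snap (channel is an assumption; adjust as needed)
sudo snap refresh juju --channel=2.9/stable
# then upgrade the controller and the model agents
juju upgrade-controller --agent-version 2.9.38
juju upgrade-model -m osm --agent-version 2.9.38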