Charm stuck in error state after upgrading Juju from 2.9.34 to 2.9.37

Bug #2003007 reported by gulsum atici
This bug affects 1 person
Affects: Canonical Juju
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Hello,

Juju 2.9.37 is installed in a VM, and the charms were running in the osm model, which was still on Juju 2.9.34.

installed: 2.9.37 (21315) 96MB classic

ubuntu@osm12:/usr/share/osm-devops/installers$ juju status
Model Controller Cloud/Region Version SLA Timestamp
osm osm-vca microk8s/localhost 2.9.34 unsupported 17:13:21Z

App Version Status Scale Charm Channel Rev Address Exposed Message
grafana res:image@9b5a5a8 active 1 osm-grafana 12.0/stable 100 10.152.183.234 no ready
kafka active 1 kafka-k8s latest/stable 5 10.152.183.30 no
keystone active 1 osm-keystone latest/stable 5 10.152.183.188 no
lcm opensourcemano/lcm:12 active 1 osm-lcm 12.0/stable 113 10.152.183.166 no ready
mariadb mariadb/server:10.3 active 1 charmed-osm-mariadb-k8s stable 35 10.152.183.251 no ready
mon opensourcemano/mon:12 active 1 osm-mon 12.0/stable 106 10.152.183.186 no ready
mongodb library/mongo@11b4907 active 1 mongodb-k8s latest/stable 14 10.152.183.128 no
nbi opensourcemano/nbi:12 active 1 osm-nbi 12.0/stable 1 10.152.183.240 no ready
ng-ui opensourcemano/ng-ui:12 active 1 osm-ng-ui 12.0/stable 1 10.152.183.65 no ready
pla opensourcemano/pla:12 active 1 osm-pla latest/stable 1 10.152.183.141 no ready
pol opensourcemano/pol:12 active 1 osm-pol 12.0/stable 1 10.152.183.215 no ready
prometheus res:backup-image@7996084 active 1 osm-prometheus 12.0/stable 1 10.152.183.33 no ready
ro opensourcemano/ro:12 active 1 osm-ro 12.0/stable 4 10.152.183.178 no ready
zookeeper active 1 zookeeper-k8s latest/stable 10 10.152.183.34 no

Unit Workload Agent Address Ports Message
grafana/0* active idle 10.1.47.108 3000/TCP ready
kafka/0* active idle 10.1.47.104
keystone/0* active idle 10.1.47.124
lcm/0* active idle 10.1.47.100 9999/TCP ready
mariadb/0* active idle 10.1.47.106 3306/TCP ready
mon/1* active idle 10.1.47.93 8000/TCP ready
mongodb/0* active idle 10.1.47.101 27017/TCP
nbi/0* active idle 10.1.47.121 9999/TCP ready
ng-ui/0* active idle 10.1.47.120 80/TCP ready
pla/0* active idle 10.1.47.103 9999/TCP ready
pol/0* active idle 10.1.47.126 9999/TCP ready
prometheus/0* active idle 10.1.47.112 9090/TCP ready
ro/0* active idle 10.1.47.91 9090/TCP ready
zookeeper/0* active idle 10.1.47.119

The controller was upgraded by running:

juju upgrade-controller --agent-version 2.9.37

Then the model was upgraded with:

juju upgrade-model --agent-version 2.9.37
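
Whether both upgrades took effect can be confirmed from the reported agent versions (a quick sketch, assuming the model is named osm as above):

juju show-controller | grep agent-version
juju show-model osm | grep agent-version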

After the upgrade, some of the units are stuck in an error state, with the following logs:

https://pastebin.ubuntu.com/p/WYBFcRcdKr/
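
For reference, the equivalent unit logs can also be pulled locally with something like the following (a sketch; it assumes the model is named osm and uses the unit names from the status output below):

juju debug-log -m osm --replay --include unit-kafka-0 --include unit-keystone-0
juju debug-log -m osm --replay --include unit-zookeeper-0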

ubuntu@osm12:/usr/share/osm-devops/installers$ juju status
Model Controller Cloud/Region Version SLA Timestamp
osm osm-vca microk8s/localhost 2.9.37 unsupported 17:27:02Z

App Version Status Scale Charm Channel Rev Address Exposed Message
grafana res:image@9b5a5a8 active 1 osm-grafana 12.0/stable 100 10.152.183.234 no ready
kafka waiting 0/1 kafka-k8s latest/stable 5 10.152.183.30 no installing agent
keystone waiting 0/1 osm-keystone latest/stable 5 10.152.183.188 no installing agent
lcm opensourcemano/lcm:12 waiting 1 osm-lcm 12.0/stable 113 10.152.183.166 no
mariadb mariadb/server:10.3 active 1 charmed-osm-mariadb-k8s stable 35 10.152.183.251 no ready
mon opensourcemano/mon:12 active 1 osm-mon 12.0/stable 106 10.152.183.186 no ready
mongodb library/mongo@11b4907 active 1 mongodb-k8s latest/stable 14 10.152.183.128 no
nbi opensourcemano/nbi:12 waiting 2/1 osm-nbi 12.0/stable 1 10.152.183.240 no
ng-ui opensourcemano/ng-ui:12 active 1 osm-ng-ui 12.0/stable 1 10.152.183.65 no ready
pla opensourcemano/pla:12 active 1 osm-pla latest/stable 1 10.152.183.141 no ready
pol opensourcemano/pol:12 waiting 1 osm-pol 12.0/stable 1 10.152.183.215 no
prometheus res:backup-image@7996084 active 1 osm-prometheus 12.0/stable 1 10.152.183.33 no ready
ro opensourcemano/ro:12 active 1 osm-ro 12.0/stable 4 10.152.183.178 no ready
zookeeper waiting 0/1 zookeeper-k8s latest/stable 10 10.152.183.34 no installing agent

Unit Workload Agent Address Ports Message
grafana/1* active idle 10.1.47.86 3000/TCP ready
kafka/0 error lost 10.1.47.83 crash loop backoff: back-off 5m0s restarting failed container=charm-init pod=kafka-0_osm(2b65e1aa-2878-49d6-9730-396d65cc6631)
keystone/0 error lost 10.1.47.122 crash loop backoff: back-off 5m0s restarting failed container=charm-init pod=keystone-0_osm(23991780-fd5c-43ee-9141-3f772eb85e54)
lcm/1* error idle 10.1.47.96 9999/TCP crash loop backoff: back-off 5m0s restarting failed container=lcm pod=lcm-559f79cfbb-blcsj_osm(1bc7397b-28b7-448a-b216-901f52336f78)
mariadb/0* active idle 10.1.47.84 3306/TCP ready
mon/2* active idle 10.1.47.116 8000/TCP ready
mongodb/0* active idle 10.1.47.82 27017/TCP
nbi/0* active idle 10.1.47.121 9999/TCP ready
nbi/1 waiting idle 10.1.47.92 9999/TCP waiting for container
ng-ui/1* active idle 10.1.47.90 80/TCP ready
pla/1* active idle 10.1.47.125 9999/TCP ready
pol/1* error idle 10.1.47.77 9999/TCP crash loop backoff: back-off 5m0s restarting failed container=pol pod=pol-799dbfc869-qjv9h_osm(8b048123-9354-4cda-b4c0-6875ee90aa48)
prometheus/0* active idle 10.1.47.123 9090/TCP ready
ro/1* active idle 10.1.47.78 9090/TCP ready
zookeeper/0 error lost 10.1.47.95 container error:
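
The crash-loop-backoff messages above come from Kubernetes repeatedly restarting a container in the charm pods (charm-init for kafka-0 and keystone-0). One way to inspect the underlying failure directly, using kafka/0 as an example (a sketch, assuming the model maps to the osm namespace on microk8s and the pod and container names match the messages above):

microk8s kubectl -n osm get pods
microk8s kubectl -n osm describe pod kafka-0
microk8s kubectl -n osm logs kafka-0 -c charm-init --previous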

Could you help with this problem?

How to reproduce the problem:

1. Install OSM:
wget https://osm-download.etsi.org/ftp/osm-13.0-thirteen/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh --charmed

2. Upgrade the controller:
juju upgrade-controller --agent-version 2.9.37

3. Upgrade the model:
juju upgrade-model --agent-version 2.9.37

4. Check the status of the applications and the logs (see the sketch below):
juju status
juju debug-log
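
A concrete version of step 4, as referenced above (a sketch; the model and unit names follow this report):

watch -n 10 juju status -m osm
juju debug-log -m osm --tail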

Many Thanks!
Gulsum

Ian Booth (wallyworld) wrote:

I think this is a dupe of bug 1997253

Can you please try with 2.9.38?
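
For reference, re-testing against 2.9.38 would look like the following (a sketch; the Juju client snap may also need refreshing to a matching 2.9 revision first):

sudo snap refresh juju --channel=2.9/stable
juju upgrade-controller --agent-version 2.9.38
juju upgrade-model -m osm --agent-version 2.9.38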

gulsum atici (gatici) wrote:

I have tested the upgrade from 2.9.34 to 2.9.38. The problem did not appear. It's been solved.

ubuntu@osm12:/usr/share/osm-devops/installers$ juju status
Model Controller Cloud/Region Version SLA Timestamp
osm osm-vca microk8s/localhost 2.9.38 unsupported 15:28:41Z

App Version Status Scale Charm Channel Rev Address Exposed Message
grafana res:image@9b5a5a8 active 1 osm-grafana 12.0/stable 100 10.152.183.103 no ready
kafka active 1 kafka-k8s latest/stable 5 10.152.183.67 no
keystone active 1 osm-keystone latest/stable 5 10.152.183.13 no
lcm opensourcemano/lcm:12 active 1 osm-lcm 12.0/stable 113 10.152.183.23 no ready
mariadb mariadb/server:10.3 active 1 charmed-osm-mariadb-k8s stable 35 10.152.183.126 no ready
mon opensourcemano/mon:12 active 1 osm-mon 12.0/stable 106 10.152.183.191 no ready
mongodb library/mongo@11b4907 active 1 mongodb-k8s latest/stable 14 10.152.183.198 no
nbi opensourcemano/nbi:12 active 1 osm-nbi 12.0/stable 1 10.152.183.195 no ready
ng-ui opensourcemano/ng-ui:12 active 1 osm-ng-ui 12.0/stable 1 10.152.183.231 no ready
pla opensourcemano/pla:12 active 1 osm-pla latest/stable 1 10.152.183.106 no ready
pol opensourcemano/pol:12 active 1 osm-pol 12.0/stable 1 10.152.183.110 no ready
prometheus res:backup-image@7996084 active 1 osm-prometheus 12.0/stable 1 10.152.183.30 no ready
ro opensourcemano/ro:12 active 1 osm-ro 12.0/stable 4 10.152.183.66 no ready
zookeeper active 1 zookeeper-k8s latest/stable 10 10.152.183.87 no

Unit Workload Agent Address Ports Message
grafana/1* active idle 10.1.47.123 3000/TCP ready
kafka/0* active idle 10.1.47.77
keystone/0* active idle 10.1.47.90
lcm/1* active idle 10.1.47.78 9999/TCP ready
mariadb/0* active idle 10.1.47.88 3306/TCP ready
mon/2* active idle 10.1.47.84 8000/TCP ready
mongodb/0* active idle 10.1.47.111 27017/TCP
nbi/1* active idle 10.1.47.75 9999/TCP ready
ng-ui/1* active idle 10.1.47.87 80/TCP ready
pla/1* active idle 10.1.47.85 9999/TCP ready
pol/1* active idle 10.1.47.113 9999/TCP ready
prometheus/0* active idle 10.1.47.121 9090/TCP ready
ro/1* active idle 10.1.47.82 9090/TCP ready
zookeeper/0* active idle 10.1.47.92
