charm status stays "Charm configuration in progress" forever
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Cinder HPE 3PAR Charm | New | Undecided | Unassigned |
Bug Description
I'm running the cinder-three-par charm, channel yoga/stable, revision 13. It is related to the cinder-volume charm, channel yoga/stable, revision 582. cinder-volume runs separately from the main Cinder services so that it can be placed on the compute nodes that have FC HBAs (the controllers do not have them). This is Yoga on Focal.
The charm configuration is as follows:
=======
applications:
  # extra cinder volumes on compute nodes because of 3par FC access
  cinder-volume:
    bindings:
      '': oam-space
      admin: public-space
      amqp: internal-space
      certificates: internal-space
      identity-
      internal: internal-space
      public: public-space
      shared-db: internal-space
    channel: yoga/stable
    charm: cinder
    constraints: spaces=
    options:
      block-device: None
      glance-
      openstack
      region: *openstack-region
      use-
      enabled-
    num_units: 6
    to:  # trying to spread over racks
    - 2001
    - 2010
    - 2020
    - 2030
    - 2040
    - 2050
  cinder-
    channel: yoga/stable
    charm: cinder-three-par
    options:
      driver-type: fc
      san-ip: 10.x.x.x
      san-login: redacted
      san-password: redacted
      hpe3par-
      hpe3par-
      hpe3par-
      hpe3par-cpg: Xxxx_CPG03
      use-
      enforce-
      volume-
relations:
- ['cinder-
=======
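For context on what the charm is expected to produce: a subordinate configuration like the one above should end up as a backend stanza in cinder.conf on the cinder-volume units. A rough sketch of what that stanza typically looks like for the HPE 3PAR FC driver follows; the section name, the WSAPI URL, and the exact set of options rendered by this charm are assumptions here, not taken from my deployment (redacted values kept as placeholders):

```ini
# Illustrative [backend] stanza for the HPE 3PAR FC driver in cinder.conf.
# Section name and hpe3par_api_url format are assumptions; credentials redacted.
[cinder-three-par]
volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
volume_backend_name = cinder-three-par
hpe3par_api_url = https://10.x.x.x:8080/api/v1
hpe3par_username = REDACTED
hpe3par_password = REDACTED
hpe3par_cpg = Xxxx_CPG03
san_ip = 10.x.x.x
san_login = REDACTED
san_password = REDACTED
```

Seeing a stanza like this appear in the main cinder.conf is how I verified that the charm did pass its configuration through (step 3 in the event list below).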
The configuration does work: the driver connects to the storage and reports status "up" to the Cinder services. But the charm status stays at "Charm configuration in progress" even after everything is already OK.
The order of events is as follows:
- The cinder-volume and cinder-three-par charms finish installing
- Both execute hooks for some time
- cinder-three-par *does* pass its configuration to cinder-volume, which can be seen in the main cinder.conf file
- cinder-volume blocks with "services not running that should be: cinder-volume" because of a different bug (LP#1987009)
- I restart the cinder-volume services as the workaround suggested in that bug report; the services start normally and the charm status becomes active
- At some point, not sure if before or after that, the status of the cinder-three-par charm also becomes active
- At this point everything is working: cinder-volume is running, the 3PAR driver is configured, and the backend shows "up" in "openstack volume service list"
- A few minutes later, after what I think is one update-status cycle (about 5 minutes), the status of the cinder-three-par charm changes to "waiting" with the message "Charm configuration in progress" and stays like that forever:
(six truncated "cinder-" status lines here, presumably the juju status entries for the six backend units)
So, everything seems fine until that last moment, when the charm decides again that it is not configured yet. To be clear: the driver continues to work; the only issue is the charm status.
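The regression at update-status time suggests the charm recomputes its workload status on every hook and, on that later run, reaches a different conclusion than it did right after config-changed. I have not read the charm source, so as a purely illustrative toy (none of these function or variable names come from cinder-three-par), the failure mode would look like this:

```python
# Toy model of a charm that derives workload status on every hook run.
# All names are illustrative; they are NOT from the cinder-three-par code.

def derive_status(config_flag_set: bool, backend_up: bool) -> str:
    """Recompute workload status from currently observable state.

    If the status check depends on a flag that is only set transiently
    during config-changed (and never persisted), a later update-status
    run sees it unset and regresses to 'waiting', even though the
    backend itself keeps working.
    """
    if config_flag_set and backend_up:
        return "active"
    return "waiting: Charm configuration in progress"


# Right after config-changed, both conditions hold:
print(derive_status(config_flag_set=True, backend_up=True))   # active

# One update-status cycle later, the non-persisted flag reads False,
# and the status flips back while the driver still reports "up":
print(derive_status(config_flag_set=False, backend_up=True))  # waiting: ...
```

If something along these lines is what the charm does, the fix would be for the status check to test a persisted or re-derivable condition (e.g. the rendered backend stanza, or the relation data actually sent) rather than transient hook state.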
This is the *complete* log in the cinder-three-par unit for a fresh deployment:
ubuntu@b01u01-cc:/var/log/juju$ cat unit-cinder-three-par-backend03-0.log
2023-01-23 20:10:47 INFO juju unit_agent.go:289 Starting unit workers for "cinder-three-par-backend03/0"
2023-01-23 20:10:47 INFO juju.worker.apicaller connect.go:163 [2f1619] "unit-cinder-three-par-backend03-0" successfully connected to "10.1.8.53:17070"
2023-01-23 20:10:47 INFO juju.worker.apicaller connect.go:260 [2f1619] password changed for "unit-cinder-three-par-backend03-0"
2023-01-23 20:10:47 INFO juju.worker.apicaller connect.go:163 [2f1619] "unit-cinder-three-par-backend03-0" successfully connected to "10.1.8.53:17070"
2023-01-23 20:10:47 INFO juju.worker.migrationminion worker.go:142 migration phase is now: NONE
2023-01-23 20:10:47 INFO juju.worker.logger logger.go:120 logger worker started
2023-01-23 20:10:47 INFO juju.worker.upgrader upgrader.go:216 no waiter, upgrader is done
2023-01-23 20:10:47 INFO juju.worker.uniter uniter.go:326 unit "cinder-three-par-backend03/0" started
2023-01-23 20:10:47 INFO juju.worker.uniter uniter.go:631 resuming charm install
2023-01-23 20:10:47 INFO juju.worker.uniter.charm bundles.go:78 downloading ch:amd64/focal/cinder-three-par-13 from API server
2023-01-23 20:10:48 INFO juju.worker.uniter uniter.go:344 hooks are retried true
2023-01-23 20:10:48 INFO juju.worker.uniter.storage resolver.go:127 initial storage attachments ready
2023-01-23 20:10:48 INFO juju.worker.uniter resolver.go:149 found queued "install" hook
2023-01-23 20:17:17 INFO juju.worker.meterstatus runner.go:93 ran "meter-status-changed" hook (via hook dispatching script: dispatch)
2023-01-23 20:17:23 INFO unit.cinder-three-par-backend03/0.juju-log server.go:316 Running legacy hooks/install.
2023-01-23 20:17:26 INFO unit.cinder-three-par-backend03/0.juju-log server.go:316 Installing ['python3-3parclient', 'sysfsutils'] with options: ['--option=Dpkg::Options::=--force-confold']
2023-01-23 20:17:29 INFO unit.cinder-three-par-backend03/0.juju-log server.go:316 Updating status
2023-01-23 20:17:29 INFO unit.cinder-three-par-backend03/0.juju-log server.go:316 Status updated
2023-01-23 20:17:30 INFO juju.worker.uniter.operation runhook.go:146 ran "install" hook (via hook dispatching script: dispatch)
2023-01-23 20:22:47 INFO juju.worker.uniter.operation runhook.go:146 ran "storage-backend-relation-created" hook (via hook dispatching script: dispatch)
2023-01-23 20:22:47 INFO juju.worker.uniter resolver.go:149 found queued "leader-elected" hook
2023-01-23 20:23:01 INFO juju.worker.uniter.operation runhook.go:146 ran "leader-elected" hook (via hook dispatching script: dispatch)
2023-01-23 20:23:57 INFO juju.worker.uniter.operation runhook.go:146 ran "config-changed" hook (via hook dispatching script: dispatch)
2023-01-23 20:23:57 INFO juju.worker.uniter resolver.go:149 found queued "start" hook
2023-01-23 20:24:26 INFO unit.cinder-three-par-backend03/0.juju-log server.go:316 Running legacy hooks/start.
2023-01-23 20:24:27 INFO juju.worker.uniter.operation runhook.go:146 ran "start" hook (via hook dispatching script: dispatch)
2023-01-23 20:25:59 INFO juju.worker.uniter.operation runhook.g...