series-upgrade 'prepare' on non-leader units fails after juju config source=distro
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack RabbitMQ Server Charm | New | Undecided | Unassigned |
Bug Description
While following the Charm Guide playbook for a stateful, clustered application series upgrade from Xenial to Bionic on the latest 21.04 charms, the rabbitmq-server non-leader units refused upgrade-series prepare because they were stuck in the config-changed hook.
Process is documented here:
https:/
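For reference, the step that failed is the prepare call against each non-leader machine; a command along these lines was used (the machine ID is a placeholder):

juju upgrade-series <machine-id> prepare bionic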
Traceback was:
2021-05-10 22:52:15 WARNING config-changed File "/var/lib/
2021-05-10 22:52:15 WARNING config-changed hooks.execute(
2021-05-10 22:52:15 WARNING config-changed File "/var/lib/
2021-05-10 22:52:15 WARNING config-changed self._hooks[
2021-05-10 22:52:15 WARNING config-changed File "/var/lib/
2021-05-10 22:52:15 WARNING config-changed return f(*args, **kwargs)
2021-05-10 22:52:15 WARNING config-changed File "/var/lib/
2021-05-10 22:52:15 WARNING config-changed return f(*args, **kwargs)
2021-05-10 22:52:15 WARNING config-changed File "/var/lib/
2021-05-10 22:52:15 WARNING config-changed update_
2021-05-10 22:52:15 WARNING config-changed File "/var/lib/
2021-05-10 22:52:15 WARNING config-changed hostname, unit, vhosts, user, password = rabbit.
2021-05-10 22:52:15 WARNING config-changed File "/var/lib/
2021-05-10 22:52:15 WARNING config-changed create_user(user, password, ['monitoring'])
2021-05-10 22:52:15 WARNING config-changed File "/var/lib/
2021-05-10 22:52:15 WARNING config-changed exists = user_exists(user)
2021-05-10 22:52:15 WARNING config-changed File "/var/lib/
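The traceback bottoms out around user_exists(). A hypothetical sketch, not the charm's actual code, of why such a helper can wedge the hook: it shells out to rabbitmqctl, which blocks while the local broker cannot reach the rest of the cluster mid-upgrade, so config-changed never returns and the unit agent never goes idle.

    import subprocess

    def user_exists(user):
        # Hypothetical reconstruction: rabbitmqctl needs a responsive broker.
        # While this node is cut off from the cluster during the series
        # upgrade, the call hangs, so the config-changed hook never exits.
        out = subprocess.check_output(['rabbitmqctl', 'list_users'])
        return any(line.split('\t')[0] == user
                   for line in out.decode().splitlines())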
The workaround is to patch config_changed on the affected unit in rabbitmq_
https:/
and then run:
systemctl restart jujud-unit-
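A minimal sketch of the kind of guard such an on-unit patch could add; the linked diff is truncated above, so the exact change and the is_unit_upgrading_set helper (from charmhelpers) are assumptions:

    # Sketch only; the real patch is behind the truncated link above.
    from charmhelpers.contrib.openstack.utils import is_unit_upgrading_set

    def config_changed():
        if is_unit_upgrading_set():
            # While a series upgrade is in flight, skip the user/vhost
            # reconciliation that shells out to rabbitmqctl (see traceback),
            # so the hook exits cleanly and the agent can reach an idle state.
            return
        ...  # original hook body continues here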
tags: added: openstack-upgrade
tags: added: series-upgrade; removed: openstack-upgrade
For clarity, I believe the deadlock in the cluster series-upgrade process is caused by Step 5:

Step 5. Set the value of the source configuration option to 'distro':
juju config percona-cluster source=distro

happening before the other units are upgraded. The config change fires config-changed on every unit, and on the non-leaders still on the old series the hook hangs (per the traceback above), so those unit agents never go idle. Accordingly, the CLI reported an error at the "prepare" step for the non-leader unit saying it was not in an idle state.
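If that diagnosis is right, the implied safe ordering defers the source change until every machine has finished the series upgrade. The commands below are illustrative only; machine IDs and the exact playbook steps are assumptions:

    juju upgrade-series 0 prepare bionic    # repeat for each machine
    # do-release-upgrade and reboot on each machine, then:
    juju upgrade-series 0 complete          # repeat for each machine
    juju config rabbitmq-server source=distro   # only once all units are on Bionic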