[percona cluster -> mysql-innodb-cluster upgrade] "Vault cannot authorize approle" after unseal and change of database.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Charm Guide | Triaged | Medium | Unassigned |
vault-charm | Triaged | Medium | Unassigned |
Bug Description
cs:vault-46, 3 units.
Units were upgraded to Focal from Bionic.
We removed the relation to mysql (bionic) and added a relation to mysql-router, per https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/percona-series-upgrade-to-focal.html.
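For reference, the relation change was roughly the following (a sketch; the endpoint names use vault's 'shared-db' relation, and the application names 'mysql' and 'vault-mysql-router' are assumptions based on our deployment):

  $ juju remove-relation vault:shared-db mysql:shared-db
  $ juju add-relation vault:shared-db vault-mysql-router:shared-db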
I then ran the 'pause' action on one unit, followed by 'resume'. After the resume, I logged into the unit and unsealed it.
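The pause/resume and unseal steps were along these lines (a sketch; assumes vault's API is listening on the default local address on the unit):

  $ juju run-action vault/0 pause --wait
  $ juju run-action vault/0 resume --wait
  $ juju ssh vault/0
  $ export VAULT_ADDR=http://127.0.0.1:8200
  $ vault operator unseal    # run once per unseal key share until the threshold is met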
The vault unit status is 'blocked'/'idle', with the message "Vault cannot authorize approle".
The unit log contains the following traceback:
2021-10-05 03:35:41 DEBUG update-status active
2021-10-05 03:35:41 DEBUG worker.uniter.jujuc server.go:204 running hook tool "application-
2021-10-05 03:35:41 DEBUG worker.uniter.jujuc server.go:204 running hook tool "leader-get"
2021-10-05 03:35:41 DEBUG worker.uniter.jujuc server.go:204 running hook tool "leader-get"
2021-10-05 03:35:41 DEBUG worker.uniter.jujuc server.go:204 running hook tool "leader-get"
2021-10-05 03:35:43 DEBUG worker.uniter.jujuc server.go:204 running hook tool "leader-get"
2021-10-05 03:35:45 DEBUG worker.uniter.jujuc server.go:204 running hook tool "leader-get"
2021-10-05 03:35:49 DEBUG worker.uniter.jujuc server.go:204 running hook tool "leader-get"
2021-10-05 03:35:57 DEBUG worker.uniter.jujuc server.go:204 running hook tool "leader-get"
2021-10-05 03:36:13 DEBUG worker.uniter.jujuc server.go:204 running hook tool "leader-get"
2021-10-05 03:36:45 DEBUG worker.uniter.jujuc server.go:204 running hook tool "leader-get"
2021-10-05 03:37:45 DEBUG worker.uniter.jujuc server.go:204 running hook tool "leader-get"
2021-10-05 03:37:45 DEBUG worker.uniter.jujuc server.go:204 running hook tool "juju-log"
2021-10-05 03:37:45 WARNING juju-log InternalServerE
2021-10-05 03:37:45 DEBUG worker.uniter.jujuc server.go:204 running hook tool "juju-log"
2021-10-05 03:37:45 ERROR juju-log Traceback (most recent call last):
File "/var/lib/
vault.
File "/var/lib/
return self(f, *args, **kw)
File "/var/lib/
do = self.iter(
File "/var/lib/
raise retry_exc.reraise()
File "/var/lib/
raise self.last_
File "/usr/lib/
return self.__get_result()
File "/usr/lib/
raise self._exception
File "/var/lib/
result = fn(*args, **kwargs)
File "/var/lib/
client.
File "/var/lib/
return self.auth(
File "/var/lib/
return self._adapter.auth(
File "/var/lib/
response = self.post(url, **kwargs).json()
File "/var/lib/
return self.request(
File "/var/lib/
utils.
File "/var/lib/
raise exceptions.
hvac.exceptions
tags: sts
Looks like the units are attempting to connect to the 'old' mysql host, rather than mysql-router, until Vault is restarted. Restarting the first unit resulted in that error, but when the second unit was restarted, the cluster re-formed and hooks started to run again.
This is, therefore, an issue that should be documented at https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/percona-series-upgrade-to-focal.html with some notes on how to properly restart.
- Vault needs to be restarted after changing the db relation, but the charm does not restart it (and, naturally, it then needs unsealing); see the restart sketch after this list.
- The error message supplied by the charm doesn't actually tell us that.
- Vault needs to be added to the guide at https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/percona-series-upgrade-to-focal.html.
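For the record, the manual restart-and-unseal sequence was roughly the following (a sketch; the systemd service name 'vault' and the API address are assumptions and may differ, e.g. snap.vault.vault for a snap-based install):

  $ juju run --unit vault/0 'systemctl restart vault'
  $ juju ssh vault/0
  $ export VAULT_ADDR=http://127.0.0.1:8200   # adjust if the API listens elsewhere or uses TLS
  $ vault operator unseal                     # repeat once per required unseal key share

Repeat for each unit; in our case the cluster only re-formed and hooks started running again once the second unit had been restarted and unsealed.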