Looking at the data provided in the description and Alex's commentary in comment #1, I see the same thing. Unfortunately, the keystone charm makes a request to the keystone API as part of the relation-changed hook and fails for the last time 1 second before the mysql-router service is successfully restarted and working. However, if we look closely, we will also see that the identity-service-relation-changed hooks begin to fail with the following error (both before and after the database connection challenges):

--- Before ---
2023-04-01 08:36:00 ERROR unit.keystone/1.juju-log server.go:316 identity-service:134: The call within manager.py failed with the error: 'Unable to establish connection to http://localhost:35337/v3/auth/tokens: HTTPConnectionPool(host='localhost', port=35337): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))'. The call was: path=['list_services'], args=(), kwargs={}, api_version=None

--- After ---
2023-04-01 08:41:41 WARNING unit.keystone/1.identity-service-relation-changed logger.go:60 keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://localhost:35337/v3/auth/tokens: HTTPConnectionPool(host='localhost', port=35337): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
2023-04-01 08:41:41 ERROR juju.worker.uniter.operation runhook.go:153 hook "identity-service-relation-changed" (via explicit, bespoke hook script) failed: exit status 1

--- Even later ---
2023-04-01 08:47:31 ERROR unit.keystone/1.juju-log server.go:316 identity-service:134: The call within manager.py failed with the error: 'Unable to establish connection to http://localhost:35337/v3/auth/tokens: HTTPConnectionPool(host='localhost', port=35337): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))'. The call was: path=['list_services'], args=(), kwargs={}, api_version=None

However, this call will always fail in the current state, because the keystone services are not listening on port 35337; they are instead listening on 35347, per listening.txt in the unit data from the crashdump.

Now, per the local port calculations in determine_api_port (called from add_service_to_keystone), the unit is forced into singlenode_mode, which automatically decreases the port by 10. Additionally, because the ca is found in the certificates relation, the port is decreased by another 10. As such, the charm is trying to interact with port 35337 while apache2 is listening on port 35347.

It's not clear yet *why* the port is not updated correctly. I suspect that when the last service evaluation occurred, the ca was not yet present in the certificates relation, and thus the calculation did not go into https port mode. The code that picks the listening port is unfortunately complicated (why is it so complicated?), which leads to situations like this. We still need to determine why the ports mismatch.
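To make the mismatch concrete, here is a minimal sketch of the port arithmetic described above. The function name mirrors determine_api_port, but the offsets and signature here are simplified assumptions based on the observed behavior (keystone's default admin port 35357, minus 10 for singlenode_mode, minus another 10 when https is active), not the actual charm-helpers source:

```python
# Sketch of the port calculation (simplified; the real logic lives in
# charm-helpers). The -10 offsets are assumptions inferred from the
# observed ports, not copied from the actual implementation.

KEYSTONE_ADMIN_PORT = 35357  # keystone's default admin API port


def determine_api_port(base_port, singlenode_mode=False, https=False):
    """Return the port the charm expects the local service to answer on."""
    port = base_port
    if singlenode_mode:
        port -= 10  # haproxy sits in front, even with a single unit
    if https:
        port -= 10  # apache2 terminates TLS in front of that
    return port


# What the charm computes once the ca appears in the certificates relation:
charm_port = determine_api_port(KEYSTONE_ADMIN_PORT,
                                singlenode_mode=True, https=True)    # 35337

# What apache2 is actually listening on, i.e. the same calculation made
# earlier when the ca was not yet present (https offset never applied):
apache_port = determine_api_port(KEYSTONE_ADMIN_PORT,
                                 singlenode_mode=True, https=False)  # 35347
```

This reproduces the exact mismatch seen in the logs: the charm polls 35337 while the service answers on 35347, so every request is refused.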