Race during Keystone deploy (fernet)
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Identity (keystone) | Invalid | Undecided | Unassigned |
kolla-ansible | In Progress | Medium | Maciej Kucia |
Bug Description
Red Hat 7.6, OpenStack Ocata
Custom-built Docker images using the binary type.
Keystone is configured to use fernet tokens.
When the keystone container is started, it expects the fernet key directory and keys to be present.
This is checked by the following code: https:/
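For context, that startup check behaves roughly like the sketch below: keystone lists the key repository and, if no keys are found, raises SystemExit pointing the operator at keystone-manage fernet_setup. This is only an illustrative reconstruction (the function and variable names are hypothetical, since the link above is truncated), not keystone's actual code.

```python
import os

# Illustrative reconstruction of the startup check described above.
# Names are hypothetical; they do not match keystone's identifiers.
def check_fernet_key_repository(key_repo='/etc/keystone/fernet-keys'):
    """Abort startup if the fernet key repository contains no keys."""
    try:
        # Fernet keys are plain files named 0, 1, 2, ... in the repository.
        keys = [name for name in os.listdir(key_repo) if name.isdigit()]
    except OSError:
        keys = []

    if not keys:
        # Raising SystemExit here kills the process, which is the
        # failure shown in the traceback below.
        raise SystemExit(
            '%s does not contain keys, use keystone-manage fernet_setup '
            'to create Fernet keys.' % key_repo)
```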
In some rare scenarios, the keystone container fails with:
2019-05-31 17:26:39.620011 File "/usr/lib/
2019-05-31 17:26:39.620106 'Fernet keys.') % subs)
2019-05-31 17:26:39.620126 SystemExit: /etc/keystone/
When inspecting the directory, the keys are there:
(keystone)
total 12
drwxrwx---. 2 keystone keystone 33 May 31 17:26 .
drwxr-x---. 1 root keystone 61 May 31 17:26 ..
-rw-------. 1 keystone keystone 44 May 31 17:26 0
-rw-------. 1 keystone keystone 44 May 31 17:26 1
-rw-------. 1 keystone keystone 44 May 31 17:26 2
Please note that the files' creation time is the same as the error message time (17:26).
Upon inspection of the ansible/
The init_fernet.yml task is executed after flush_handlers. When the handlers run, containers are created or restarted.
The obvious option would be to move init_fernet before the handlers, but this task requires keystone_ssh and keystone_fernet to be up and running.
Possible solutions include:
- Changes in keystone itself to retry initialization for as long as the keys are missing (see the sketch after this list)
- Changes in keystone to fail in a way that makes the container restart
- Changes in kolla-ansible to enforce fernet init before the keystone container starts.
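As an illustration of the first option, a retry loop around key loading might look like the sketch below. The helper is purely hypothetical (keystone does not ship this logic today); it simply polls the repository until kolla-ansible's init_fernet step has populated it, instead of exiting immediately.

```python
import os
import time

# Hypothetical retry wrapper illustrating the first proposal above.
def wait_for_fernet_keys(key_repo='/etc/keystone/fernet-keys',
                         timeout=300, interval=5):
    """Return once the key repository contains at least one key.

    Raises SystemExit if no keys appear within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if any(name.isdigit() for name in os.listdir(key_repo)):
                return
        except OSError:
            pass  # the directory may not exist yet
        time.sleep(interval)
    raise SystemExit('%s still contains no Fernet keys after %s seconds'
                     % (key_repo, timeout))
```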
The bug was found on Ocata, but judging from the Ansible manifests it could happen on master as well.
Workaround:
Restart Keystone container.
Changed in kolla-ansible:
importance: Undecided → Medium
Fix proposed to branch: master
Review: https://review.opendev.org/665326