Not all keystone units setting relation data
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Keystone Charm | Fix Released | High | David Ames |
Bug Description
Keystone full functional tests reveal a fairly regular occurrence of the api_version relation data not being present at the time it is checked. This is not limited to a single OpenStack version; it is observed in multiple releases. I believe this is just a test race.
Further, I'd say that the method (keystone_
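A retry-with-timeout around the relation-data check would make the test robust to this race. The sketch below is generic and hypothetical, not the charm test suite's actual helper: `fetch_relation_data` is a stand-in for however the test reads a unit's relation settings, and the timings are illustrative.

```python
import time


def wait_for_propagation(fetch_relation_data, units, required_key="api_version",
                         timeout=60, interval=5):
    """Poll each unit's relation data until required_key appears on all
    of them, or raise after `timeout` seconds.

    `fetch_relation_data` is a hypothetical callable mapping a unit name
    to its dict of relation settings.
    """
    deadline = time.time() + timeout
    missing = list(units)
    while time.time() < deadline:
        # Re-check which units still have not published the key.
        missing = [u for u in units
                   if required_key not in fetch_relation_data(u)]
        if not missing:
            return True
        time.sleep(interval)
    raise TimeoutError(
        "units never set {}: {}".format(required_key, missing))
```

A test using this would keep polling instead of asserting on the first read, so a unit that is merely slow to publish no longer fails the run.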
01:49:55.615 DEBUG:runner:
01:49:55.615 DEBUG:runner: File "/tmp/bundletes
01:49:55.615 DEBUG:runner: deployment = KeystoneBasicDe
01:49:55.615 DEBUG:runner: File "/tmp/bundletes
01:49:55.615 DEBUG:runner: self._initializ
01:49:55.615 DEBUG:runner: File "/tmp/bundletes
01:49:55.615 DEBUG:runner: self.set_
01:49:55.615 DEBUG:runner: File "/tmp/bundletes
01:49:55.615 DEBUG:runner: u.keystone_
01:49:55.615 DEBUG:runner: File "/tmp/bundletes
01:49:55.615 DEBUG:runner: self.keystone_
01:49:55.615 DEBUG:runner: File "/tmp/bundletes
01:49:55.615 DEBUG:runner: "".format(
01:49:55.615 DEBUG:runner:
01:49:56.515 DEBUG:runner:Exit Code: 1
Changed in charm-keystone:
assignee: nobody → David Ames (thedac)

Changed in charm-keystone:
status: Fix Committed → Fix Released
This is a legitimate bug in keystone: sometimes not all units have updated their relation data with client services. In other words, the test was catching a real bug.
Note in the amulet failure that one unit has api_version set and the other does not: https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_amulet_full/openstack/charm-keystone/557680/9/1328/consoleText.test_charm_amulet_full_2678.txt
DEBUG:runner: 2018-04-05 15:07:14,389 keystone_wait_for_propagation DEBUG: keystone relation data: {u'service_protocol': u'http', u'service_tenant': u'services', u'admin_token': u'ubuntutesting', u'ingress-address': u'172.17.106.30', u'service_password': u'NJtFrZ9X59p9gdp5VwJjcK3Gnq8gcV4xkWmp5HrC5P4CzVMnLBjcdT5LVy8KFHB5', u'service_port': u'5000', u'auth_port': u'35357', u'auth_protocol': u'http', u'private-address': u'172.17.106.30', u'egress-subnets': u'172.17.106.30/32', u'service_host': u'172.17.106.30', u'auth_host': u'172.17.106.30', u'service_username': u'cinder_cinderv2', u'service_tenant_id': u'7cbf0289a54849e4a815cee7ce1e8928', u'api_version': u'2'}
2018-04-05 15:07:21,182 keystone_wait_for_propagation DEBUG: keystone relation data: {u'private-address': u'172.17.106.36', u'egress-subnets': u'172.17.106.36/32', u'ingress-address': u'172.17.106.36'}
DEBUG:runner:
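The race in the log above can be reproduced offline: given the two relation-data snapshots, a simple key check flags the unit that has not yet published its settings. The unit names and the trimmed-down dicts below are illustrative, taken from the logged data.

```python
# Relation data as logged by keystone_wait_for_propagation: one unit has
# published full settings, the other only the address defaults Juju sets.
complete = {"service_protocol": "http", "service_port": "5000",
            "api_version": "2", "service_host": "172.17.106.30"}
incomplete = {"private-address": "172.17.106.36",
              "egress-subnets": "172.17.106.36/32",
              "ingress-address": "172.17.106.36"}


def units_missing(key, snapshots):
    """Return the unit names whose relation data lacks `key`."""
    return [unit for unit, data in snapshots.items() if key not in data]


snapshots = {"keystone/0": complete, "keystone/1": incomplete}
print(units_missing("api_version", snapshots))  # → ['keystone/1']
```

Any check like this must run against every keystone unit on the relation, since a single lagging unit is enough to break clients that happen to read its bag.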
Here is another example. Keystone 0 and 1 have set the data but keystone 2 has not:
$ get-relation-data.sh identity-service cinder/0 keystone/0 M8ZHgKPknrzjCpYRHZzKpWsJZm6b86gbN6VqG8YCGwVw33yCB f888498be104d2b37
CMD: juju run --unit cinder/0 'relation-get -r identity-service:4 - keystone/0'
admin_token: ubuntutesting
api_version: "2"
auth_host: 10.5.0.162
auth_port: "35357"
auth_protocol: http
egress-subnets: 10.5.0.162/32
ingress-address: 10.5.0.162
private-address: 10.5.0.162
service_host: 10.5.0.162
service_password: Xh6j8VVW7TXptYy
service_port: "5000"
service_protocol: http
service_tenant: services
service_tenant_id: f4fa132500b44de
service_username: cinderv2_cinderv3
$ get-relation-data.sh identity-service cinder/0 keystone/1 M8ZHgKPknrzjCpYRHZzKpWsJZm6b86gbN6VqG8YCGwVw33yCB f888498be104d2b37
CMD: juju run --unit cinder/0 'relation-get -r identity-service:4 - keystone/1'
admin_token: ubuntutesting
api_version: "2"
auth_host: 10.5.0.162
auth_port: "35357"
auth_protocol: http
egress-subnets: 10.5.0.152/32
ingress-address: 10.5.0.152
private-address: 10.5.0.152
service_host: 10.5.0.162
service_password: Xh6j8VVW7TXptYy
service_port: "5000"
service_protocol: http
service_tenant: services
service_tenant_id: f4fa132500b44de
service_username: cinderv2_cinderv3
$ get-relation-data.sh identity-service cinder/0 keystone/2
CMD: juju run --unit cinder/0 'relation-get -r identity-service:4 - keystone/2'
egress-subnets: 10.5.0.169/32
ingress-address: 10.5.0.169
private-address: 10.5.0.169
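The `relation-get` output above is plain `key: value` lines, so a small parser can confirm which keystone units have published api_version. The helper below is a sketch assuming that line format; the sample strings are abbreviated from the keystone/0 and keystone/2 output shown.

```python
def parse_relation_get(text):
    """Parse `juju run ... relation-get` output, one `key: value` per line."""
    data = {}
    for line in text.strip().splitlines():
        # Split on the first colon only; values like 10.5.0.169/32 are safe.
        key, _, value = line.partition(":")
        data[key.strip()] = value.strip().strip('"')
    return data


keystone_0 = 'api_version: "2"\nauth_host: 10.5.0.162\nservice_port: "5000"'
keystone_2 = "egress-subnets: 10.5.0.169/32\ningress-address: 10.5.0.169"

for unit, out in [("keystone/0", keystone_0), ("keystone/2", keystone_2)]:
    settings = parse_relation_get(out)
    print(unit, "api_version" in settings)
# keystone/0 has api_version; keystone/2 has only the Juju address defaults.
```

Wrapped in a loop over all keystone units, a check like this turns the manual `get-relation-data.sh` comparison into something a functional test can retry until propagation completes.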