Keystone leader spends too much time updating endpoints
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Keystone Charm | Triaged | Medium | Unassigned |
Bug Description
This isn't a bug per se, but it's a pain point which has eaten enough of my time that I want to make sure a bug is filed for this.
In response to various hooks, I often see the keystone leader spend significant time going through and updating each of the endpoints, or at least, that's what "juju status" indicates is happening.
Here's a concrete case: today I needed to perform a Juju controller upgrade from Juju 2.9.22 to Juju 2.9.37. This appears to have triggered a chain of hooks on the keystone unit which took over an hour to complete. I've attached partial output of "juju show-status-log" for reference.
The exact charm in use here is cs:keystone-323. This particular cloud is a bionic/stein cloud, so older charms are in use.
I do see a similar bug which was filed some time ago (https:/
Not seeing other bugs regarding this, I suspect this may still be an issue on the current trunk, although admittedly I don't have hard evidence for that at present. It may be premature, but I wanted to file this bug to ensure I don't forget to do so.
Best Regards,
Paul Goins
I agree that it spends too much time; install, upgrade-charm (refresh), and config-changed all trigger the endpoint-updating code.
The main issue is that the endpoint code does round-trips to the Keystone API (which in turn talks to the database), but it does so inefficiently. My first refactor did speed it up quite a bit, but there is still work to be done. However, I guess it is not a high priority, as it 'works', despite being slow. It would be an interesting exercise in understanding the code, though, and the tests are fairly rigorous, so it is amenable to refactoring; thus I will mark it as a 'good-first-bug'.
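To illustrate the kind of inefficiency described above, here is a minimal, hypothetical sketch (not the charm's actual code): a naive updater that re-lists endpoints from the API on every iteration versus one that fetches the listing once and only writes the endpoints that changed. `FakeKeystoneAPI` and the function names are invented for the example.

```python
class FakeKeystoneAPI:
    """Stand-in for the Keystone API; counts round-trips for comparison."""

    def __init__(self, endpoints):
        self._endpoints = dict(endpoints)
        self.calls = 0  # number of simulated API round-trips

    def list_endpoints(self):
        self.calls += 1
        return dict(self._endpoints)

    def update_endpoint(self, name, url):
        self.calls += 1
        self._endpoints[name] = url


def update_endpoints_naive(api, desired):
    # One list round-trip per endpoint, plus one write per change.
    for name, url in desired.items():
        current = api.list_endpoints()
        if current.get(name) != url:
            api.update_endpoint(name, url)


def update_endpoints_cached(api, desired):
    # A single listing up front; only changed endpoints are written.
    current = api.list_endpoints()
    for name, url in desired.items():
        if current.get(name) != url:
            api.update_endpoint(name, url)


desired = {f"svc{i}": f"https://svc{i}:5000/v3" for i in range(10)}

naive = FakeKeystoneAPI({})
update_endpoints_naive(naive, desired)

cached = FakeKeystoneAPI({})
update_endpoints_cached(cached, desired)

print(naive.calls, cached.calls)  # 20 vs 11 round-trips for 10 endpoints
```

With many services and multiple hooks each re-running this logic, the difference between per-endpoint reads and a single cached listing compounds quickly, which matches the hour-plus hook chains reported above.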