Removing and redeploying the mysql database does not update the relation

Bug #1797229 reported by Wouter van Bommel
Affects: OpenStack Keystone Charm
Status: Fix Released
Importance: Low
Assigned to: Paul Goins
Milestone: 19.04

Bug Description

During testing I found that if the mysql database (percona-cluster) is removed from the model and added back later, at least the host entry of the database is not updated in the keystone configuration.

Or, to be more precise, the apache instance is not restarted, so the old settings are kept.

After a manual 'systemctl restart apache', keystone no longer complains about not being able to connect to the database.

There is a new problem: missing tables. These don't seem to be populated, but that is probably worth a separate bug report.
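
To confirm the symptom on an affected unit, the database host keystone is actually configured with can be read out of the rendered config and compared with the address of the current percona-cluster unit. The following is only an illustrative sketch (not part of the charm), assuming the stock /etc/keystone/keystone.conf path and a standard [database] connection URL:

#!/usr/bin/env python3
# Illustrative sketch: report which DB host keystone.conf currently points at,
# so it can be compared with the address of the current percona-cluster unit.
import configparser
from urllib.parse import urlsplit

KEYSTONE_CONF = "/etc/keystone/keystone.conf"  # stock package path (assumed)

def current_db_host(conf_path=KEYSTONE_CONF):
    # strict=False tolerates duplicate keys; interpolation=None avoids
    # choking on '%' characters that may appear in oslo.config values.
    cfg = configparser.ConfigParser(strict=False, interpolation=None)
    cfg.read(conf_path)
    if not cfg.has_option("database", "connection"):
        return None
    # Typical value: mysql+pymysql://keystone:<password>@10.5.0.25/keystone
    return urlsplit(cfg.get("database", "connection")).hostname

if __name__ == "__main__":
    print("keystone.conf points at DB host:", current_db_host())
    # If this is still the old (removed) percona unit, the config was not
    # re-rendered and apache/keystone kept the stale settings.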

Revision history for this message
Wouter van Bommel (woutervb) wrote :

subscribed: field-medium

tags: added: bootstack
James Page (james-page)
Changed in charm-keystone:
importance: Undecided → Low
Revision history for this message
David Ames (thedac) wrote :

TRIAGE:

Handle removal of shared-db relation.
 When the new relation has old DB.
 When the new relation has an empty DB.

Changed in charm-keystone:
status: New → Triaged
Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

Just confirming what David (thedac) has already suggested:

1. Removing and re-adding the shared-db relation doesn't do anything to the keystone config.

After the relation has been removed, keystone continues functioning because the config has not been re-written. When the relation is re-added to the existing database, it just carries on fine.

2. Removing the database application and re-adding it breaks keystone. The new config does get written (i.e. the connection has been updated), but keystone remains broken with a hook error:

keystone/0* error idle 1 10.5.0.13 5000/tcp hook failed: "shared-db-relation-changed"

The failure also manifests with the openstack client:

openstack service list
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-0d95403d-1722-4d98-892f-43ab9e3d4fa1)

The error in keystone's log is:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1045, "Access denied for user 'keystone'@'10.5.0.13' (using password: YES)") (Background on this error at: http://sqlalche.me/e/e3q8)

i.e. the auth credentials are not ok.

Restarting the keystone units doesn't help.
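
Whether this is purely a credentials problem (rather than a connectivity one) can be checked independently of keystone by attempting a login with the values from the rendered connection string. A minimal sketch using pymysql, the driver named in the traceback; the host and password below are placeholders, not values from this deployment:

#!/usr/bin/env python3
# Illustrative check only: try the credentials keystone is configured with,
# to separate "wrong credentials" from "host unreachable".
import pymysql

DB_HOST = "10.5.0.25"     # placeholder: address of the current percona unit
DB_USER = "keystone"
DB_PASSWORD = "changeme"  # placeholder: password from the connection line
DB_NAME = "keystone"

try:
    conn = pymysql.connect(host=DB_HOST, user=DB_USER,
                           password=DB_PASSWORD, database=DB_NAME)
except pymysql.err.OperationalError as exc:
    # Error 1045 reproduces the "Access denied" seen in keystone's log;
    # error 2003 would instead indicate a connectivity problem.
    print("connection failed:", exc)
else:
    print("credentials accepted")
    conn.close()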

Changed in charm-keystone:
status: Triaged → Confirmed
Alvaro Uria (aluria)
tags: added: canonical-bootstack
removed: bootstack
Revision history for this message
Paul Goins (vultaire) wrote :

Is this still an issue?

I set up a trivial LXD model with cs:keystone-291 and cs:percona-cluster-272 running on bionic, one node per app.

Based on what I see:

* On percona removal, it's true that the DB "connection" field in keystone.conf does not get updated.
* On re-adding percona and the relation, the DB "connection" field does get updated to point to the new percona instance, resolving any breakage caused by removing percona (assuming everything got restarted; admittedly I haven't dug that far yet).

Maybe my model is too simple here? I'm unsure, but presently I can't reproduce the issue.

Revision history for this message
Paul Goins (vultaire) wrote :

Using the bionic-rocky model included with charm-keystone, I was able to reproduce this after all. Now investigating...

Revision history for this message
Paul Goins (vultaire) wrote :
Paul Goins (vultaire)
Changed in charm-keystone:
assignee: nobody → Paul Goins (vultaire)
Changed in charm-keystone:
status: Confirmed → In Progress
Revision history for this message
Paul Goins (vultaire) wrote :

Just to add a few notes:

* From what I see, Apache is being restarted on config updates. Maybe that's just in the current upstream; I'm not sure. However, it does seem to be restarting properly on the shared-db-relation-changed hook.

* The above proposed fix doesn't do anything about stale entries being left in keystone.conf after the shared-db relation is broken; however, the config will be correctly updated when the relation is re-established. My guess is that this might generally be good enough, but in case we wish to have the config revert to its original DB connection line, I've opened a separate ticket for that: https://bugs.launchpad.net/charm-keystone/+bug/1817362

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-keystone (master)

Reviewed: https://review.openstack.org/638737
Committed: https://git.openstack.org/cgit/openstack/charm-keystone/commit/?id=a87b1011066650dbfc924501202d7048f6f022bc
Submitter: Zuul
Branch: master

commit a87b1011066650dbfc924501202d7048f6f022bc
Author: Paul Goins <email address hidden>
Date: Fri Feb 22 10:12:32 2019 -0800

    Unset DB init flag on shared-db relation removal

    The keystone charm sets db-initialised to true after initializing the
    database the first time. However, if the database application is
    removed, this flag is not unset.

    This results in breakage on attempts to re-add a shared-db relation
    with a new database application, as the charm will not attempt to
    re-initialize the database prior to doing DB operations.

    This fix simply ensures that we unset this flag prior to finalizing
    removal of the shared-db relation.

    Change-Id: I78ae12fda05ce006939b2d90a3d738bacb815915
    Closes-Bug: #1797229
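
For reference, the shape of the fix is roughly the following. This is only an illustrative charm-helpers style sketch: the 'db-initialised' key is taken from the commit message above, but the hook wiring and helper usage shown here are assumptions, not the actual patch (see the linked review for that).

#!/usr/bin/env python3
# Sketch only: clear the 'db-initialised' leader flag when the shared-db
# relation goes away, so a future shared-db relation triggers a fresh DB
# initialisation. Hook wiring is illustrative, not the merged change.
import sys

from charmhelpers.core.hookenv import (
    Hooks,
    is_leader,
    leader_set,
    log,
)

hooks = Hooks()


@hooks.hook('shared-db-relation-broken')
def shared_db_broken():
    # Only the leader can write leader settings. Unsetting the flag means
    # the next shared-db relation will run DB initialisation again instead
    # of assuming the (possibly new, empty) database is already populated.
    if is_leader():
        leader_set({'db-initialised': None})
        log('Cleared db-initialised on shared-db relation removal')


if __name__ == '__main__':
    hooks.execute(sys.argv)

With the flag cleared, the next shared-db relation would be expected to go through the charm's normal DB initialisation path (e.g. running keystone-manage db_sync) before performing any DB operations, which is what the commit message describes.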

Changed in charm-keystone:
status: In Progress → Fix Committed
James Page (james-page)
Changed in charm-keystone:
milestone: none → 19.04
David Ames (thedac)
Changed in charm-keystone:
status: Fix Committed → Fix Released