Keystone Charm Stuck in “hook failed: 'shared-db-relation-changed'” State

Bug #1831656 reported by Mac Wynkoop
Affects: OpenStack Keystone Charm
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

juju status
Model Controller Cloud/Region Version SLA Timestamp
default netdepot-juju netdepot-maas/us-central 2.6.2 unsupported 20:30:42+02:00

App Version Status Scale Charm Store Rev OS Notes
keystone 9.3.0 error 0/1 keystone jujucharms 301 ubuntu
openstack-dashboard waiting 0 openstack-dashboard jujucharms 288 ubuntu
percona-cluster waiting 0 percona-cluster jujucharms 274 ubuntu
swift-proxy 2.7.1 active 1 swift-proxy jujucharms 83 ubuntu
swift-storage 2.17.1 active 4 swift-storage jujucharms 256 ubuntu

Unit Workload Agent Machine Public address Ports Message
keystone/0 error lost 13 74.81.70.206 5000/tcp hook failed: "shared-db-relation-changed"
swift-proxy/3* active idle 13 74.81.70.206 8080/tcp Unit is ready
swift-storage/1* active idle 3 74.81.70.201 Unit is ready
swift-storage/3 active idle 5 74.81.70.205 Unit is ready
swift-storage/6 active idle 7 74.81.70.203 Unit is ready
swift-storage/10 active idle 12 74.81.70.202 Unit is ready

Machine State DNS Inst id Series AZ Message
3 started 74.81.70.201 nice-pup bionic default Deployed
5 started 74.81.70.205 modest-beetle bionic default Deployed
7 started 74.81.70.203 poetic-mole bionic default Deployed
12 started 74.81.70.202 intent-marten bionic default Deployed
13 started 74.81.70.206 ready-walrus xenial default Deployed

From keystone
/etc/keystone configs | found in "keystoneconfigs.tgz"
/var/log/keystone and /var/log/juju logs | found in "keystonelogs.tgz" and "jujulogs.tgz"
From percona-cluster
/etc/mysql configs | found in "perconaconfigs.tgz"
/var/log/mysql and /var/log/juju logs | found in "mysqllogs.tgz" and "jujulogs.tgz"

Revision history for this message
Mac Wynkoop (headinclouds) wrote :
Steven Parker (sbparke) wrote :

Workaround:

If you are deploying this behind a MAAS proxy, the local network IP range, or at least the MySQL server IP, also has to be in the no-proxy settings. I set no-proxy to localhost plus the entire IP range of the undercloud, and that worked for me.

Steven Parker (sbparke) wrote :

Just for clarity, that was set in the Juju model config:
  juju model-config no-proxy=jujucharms.com

You could also include it in --config=config.yaml when you bootstrap:
  default-series: xenial
  no-proxy: localhost,<entire ip range>
  apt-http-proxy: http://<ip address>:<port>
  apt-https-proxy: http://<ip address>:<port>
  apt-ftp-proxy: ftp://<ip address>:<port>
  http-proxy: http://<ip address>:<port>
  https-proxy: http://<ip address>:<port>
  ftp-proxy: ftp://<ip address>:<port>
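Put together, a bootstrap using such a config file might look like the sketch below. The cloud name, proxy host, and subnet are illustrative placeholders, not values from this report, and whether no-proxy accepts a CIDR range or needs a comma-separated host list depends on the Juju version:

```shell
# Hypothetical example: bootstrap with the proxy settings stored in config.yaml.
# "maas-cloud" is a placeholder for your registered cloud name.
juju bootstrap maas-cloud --config config.yaml

# Alternatively, set the values on an already-bootstrapped model.
# 10.0.0.0/24 stands in for the undercloud IP range mentioned in the workaround.
juju model-config no-proxy="localhost,10.0.0.0/24"
```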

Dmitrii Shcherbakov (dmitriis) wrote :

The right approach for deployments with proxy servers is to use the juju-http-proxy, juju-https-proxy and juju-no-proxy model configs, which result in the JUJU_CHARM_HTTP_PROXY, JUJU_CHARM_HTTPS_PROXY and JUJU_CHARM_NO_PROXY environment variables being set in hook environments, as opposed to the HTTP_PROXY, HTTPS_PROXY and NO_PROXY variables, which by default affect runtimes and various utilities.
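Assuming a Juju version that supports these model keys (2.4+), the charm-specific proxy settings described above would be set roughly as follows; the proxy host, port, and subnet are placeholders:

```shell
# Sketch: set charm-specific proxy options so hooks see JUJU_CHARM_* variables
# instead of the process-wide HTTP_PROXY/HTTPS_PROXY/NO_PROXY.
# 10.0.0.1:3128 and 10.0.0.0/24 are illustrative values only.
juju model-config \
  juju-http-proxy="http://10.0.0.1:3128" \
  juju-https-proxy="http://10.0.0.1:3128" \
  juju-no-proxy="localhost,127.0.0.1,10.0.0.0/24"
```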

For more context, see https://github.com/juju/charm-helpers/pull/248

Alex Kavanagh (ajkavanagh) wrote :

I think this bug is invalid, as it is an issue with networking and proxying rather than with the charm. If this is not the case, please re-open the bug.

Changed in charm-keystone:
status: New → Invalid