[19.04] grants for new shared-db clients are not created when a deployment is scaled from non-HA to HA

Bug #1827458 reported by Dmitrii Shcherbakov
Affects: OpenStack Percona Cluster Charm
Status: Triaged
Importance: Medium
Assigned to: Unassigned

Bug Description

When a deployment is scaled out from a non-HA configuration to an HA configuration for all API and DB units (2-unit in my case, which uses the "two_node: 1" mode for corosync), the handling of shared-db-relation-changed by charm-percona-cluster somehow misses adding the relevant grants for the new client units.
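
For context, the 2-unit corosync mode referred to above corresponds to a quorum stanza along these lines in corosync.conf (a minimal illustrative sketch, not copied from the deployed configuration):

quorum {
    provider: corosync_votequorum
    two_node: 1
}

With two_node: 1 a two-node cluster keeps quorum when one node is lost, which is what makes the 2-unit HA topology usable in the first place.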

1) non-HA bundle:
https://pastebin.canonical.com/p/bhGKsbwxk6/

2) HA bundle:
https://pastebin.canonical.com/p/Rfj3dxhktB/

juju deploy bundle.yaml
# let it settle
juju deploy --map-machines=existing bundle-ha.yaml
# observe failures

The problem manifests itself either when an API service tries to validate a keystone token in keystonemiddleware and reaches a keystone backend that does not have a grant on the database, or when a keystone unit is accessed directly (e.g. if you need to issue a token).
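
A quick way to exercise the direct-access path is to request a token against the affected unit (a hedged example: it assumes admin credentials are sourced and OS_AUTH_URL points at the keystone unit that lacks the grant, 10.232.7.104 in the output further down):

openstack token issue

When that unit's database user has no grant on the keystone schema, the request fails instead of returning a token.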

For example, keystone has 2 units but grants are present for only one of them:

mysql> select * from INFORMATION_SCHEMA.SCHEMA_PRIVILEGES where GRANTEE LIKE '%keystone%';
+---------------------------+---------------+--------------+-------------------------+--------------+
| GRANTEE                   | TABLE_CATALOG | TABLE_SCHEMA | PRIVILEGE_TYPE          | IS_GRANTABLE |
+---------------------------+---------------+--------------+-------------------------+--------------+
| 'keystone'@'10.232.7.110' | def           | keystone     | SELECT                  | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | INSERT                  | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | UPDATE                  | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | DELETE                  | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | CREATE                  | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | DROP                    | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | REFERENCES              | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | INDEX                   | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | ALTER                   | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | CREATE TEMPORARY TABLES | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | LOCK TABLES             | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | EXECUTE                 | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | CREATE VIEW             | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | SHOW VIEW               | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | CREATE ROUTINE          | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | ALTER ROUTINE           | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | EVENT                   | NO           |
| 'keystone'@'10.232.7.110' | def           | keystone     | TRIGGER                 | NO           |
+---------------------------+---------------+--------------+-------------------------+--------------+
18 rows in set (0.00 sec)
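
As a manual workaround sketch (not the charm's own code path), the missing grant for the second keystone unit could be added by hand on the database side, reusing the existing keystone DB password; the host 10.232.7.104 comes from the network-get output below, and the password value is just a placeholder:

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'10.232.7.104' IDENTIFIED BY '<existing-keystone-db-password>';
mysql> FLUSH PRIVILEGES;

This only mirrors the kind of grant that already exists for 10.232.7.110 and is a stopgap until the charm creates the grant itself.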

juju run --application keystone 'network-get shared-db'
- Stdout: |
    bind-addresses:
    - macaddress: 00:16:3e:62:d2:ae
      interfacename: eth1
      addresses:
      - hostname: ""
        address: 10.232.7.104
        cidr: 10.232.0.0/21
    egress-subnets:
    - 10.232.7.104/32
    ingress-addresses:
    - 10.232.7.104
  UnitId: keystone/1
- Stdout: |
    bind-addresses:
    - macaddress: 00:16:3e:1f:84:8f
      interfacename: eth1
      addresses:
      - hostname: ""
        address: 10.232.7.110
        cidr: 10.232.0.0/21
    egress-subnets:
    - 10.232.7.110/32
    ingress-addresses:
    - 10.232.7.110
  UnitId: keystone/0
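
To compare these addresses against the GRANTEE hosts above, the ingress addresses alone can be listed (assuming a Juju release whose network-get supports --ingress-address):

juju run --application keystone 'network-get --ingress-address shared-db'

Only 10.232.7.110 (keystone/0) appears among the grantees, so keystone/1 at 10.232.7.104 is the unit whose database access fails.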

Dmitrii Shcherbakov (dmitriis) wrote:

Setting to 'Incomplete' as min-cluster-size was not changed from 0 to 2 in the HA bundle, which is likely the reason this happened.

Changed in charm-percona-cluster:
status: New → Incomplete
Dmitrii Shcherbakov (dmitriis) wrote:

Verified by changing min-cluster-size from 0 to 2:

# min-cluster-size=0
juju deploy bundle.yaml
# let it settle
juju deploy --map-machines=existing bundle-ha.yaml
# min-cluster-size=2
# observe failures
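
For reference, the min-cluster-size change between the two deploys can also be applied post-deploy with juju config rather than through the bundle (the application name percona-cluster below is an assumption; use whatever name the bundles above give it):

juju config percona-cluster min-cluster-size=2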

Changed in charm-percona-cluster:
status: Incomplete → Triaged
importance: Undecided → Medium
tags: added: scaleout