access-network is ignored
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
percona-cluster (Juju Charms Collection) | Fix Released | High | James Page |
Bug Description
When adding a relation between glance (also seen with nova-cloud-controller) and percona-cluster with the access-network option set, percona should grant database access to each remote unit's IP on that network.
However, the IPs that are actually granted appear random: percona grants access to a unit's IPs in no particular order and usually misses the right one. For example, the glance/0 unit had the IPs 10.0.1.129 and 10.0.4.135, but the log shows that access for 10.0.4.135 was only granted at 09:02:24 (after multiple remove/add relation exercises), while access for 10.0.1.129 was granted at 08:37:58 (on the first attempt at creating the relation).
Therefore, access-network is not working properly: not only are IPs from that network not granted access, but IPs that should not have access are granted it. In this case percona should have granted access only to IPs from the 10.0.4.0/24 network, not from the 10.0.1.0/24 network. Looking further in the log, the same problem appears with the nova-cloud-controller units.
This is only obvious because, in this case, both glance/0 and nova-cloud-controller/0 had IPs on both networks.
I'm marking this bug as High because of the scale of its impact.
Related branches
- OpenStack Charmers: review pending (requested)
  Diff: 56 lines (+22/-3), 1 file modified: hooks/percona_hooks.py (+22/-3)
- charmers: review pending (requested)
  Diff: 56 lines (+22/-3), 1 file modified: hooks/percona_hooks.py (+22/-3)
tags: added: stable
tags: added: openstack
tags: added: backport-potential
tags: added: sts
Changed in percona-cluster (Juju Charms Collection):
status: In Progress → Fix Committed
Changed in percona-cluster (Juju Charms Collection):
status: Fix Committed → Fix Released
This issue is specific to:

- percona-cluster configured using access-network
- a related service with multiple remote units
It's caused by the fact that the shared-db relation currently uses a multi-hook execution conversation to negotiate access over the correct 'access-network' configuration. The problem occurs when the related service has multiple units: the leader unit does not complete negotiation until after initial access (over 'private-address') has been granted, the follower units have negotiated correct access, and the pxc charm has switched its presented db_host value over to an IP on the access-network, resulting in the leader trying to complete operations without the appropriate grants in place.
My proposed fix is to not present data until each remote unit has presented a valid IP for access-network configurations; remote operations will continue to be gated by the presence of the unit name in allowed_hosts, but that entry won't exist until the initial negotiation has completed.