Looking at the bundle and trace provided in the initial bug description, I suspect that the mismatch between the IPs in the errors and the VIP the percona charm is configured with is the core issue.

The VIP in the bundle is "10.244.40.89", but the address the charm is trying to reach Percona on is "192.168.33.142". The machine hosting percona-cluster/0, and the IP address the charm appears to be presenting, is "10.244.41.28". These mismatches suggest to me that a binding issue is preventing the units from communicating over their intended networks.
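To make the mismatch concrete, here is a small sketch using Python's ipaddress module. The subnet prefixes (192.168.33.0/24 for internal-space, 10.244.40.0/21 for oam-space) are my assumptions for illustration; the actual prefixes are not in the bug report.

```python
import ipaddress

# Assumed subnets -- the real prefixes are not given in the bug report.
internal_space = ipaddress.ip_network("192.168.33.0/24")  # internal-space (assumption)
oam_space = ipaddress.ip_network("10.244.40.0/21")        # oam-space (assumption)

vip = ipaddress.ip_address("10.244.40.89")             # VIP from the bundle
percona_addr = ipaddress.ip_address("192.168.33.142")  # address the charm talks to

# The VIP does not live on the network the charm is using for Percona traffic.
print(vip in internal_space)           # False
print(percona_addr in internal_space)  # True
print(vip in oam_space)                # True
```

Under those assumed prefixes, the VIP can never be brought up on the interface carrying the Percona traffic, which would explain the connection failures.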
Some snippets from the juju status:
mysql:
  charm: cs:~openstack-charmers-next/percona-cluster-285
  ...
  relations:
    cluster:
    - mysql
    ha:
    - hacluster-pxc
    shared-db:
    - aodh
    - cinder
    - designate
    - glance
    - gnocchi
    - heat
    - keystone
    - neutron-api
    - nova-cloud-controller
    - openstack-dashboard
  units:
    mysql/0:
      workload-status:
        current: active
        message: Unit is ready
        since: 02 Feb 2018 12:03:20Z
      juju-status:
        current: executing
        message: running db-admin-relation-changed hook
        since: 02 Feb 2018 12:03:21Z
        version: 2.4-beta1
      leader: true
      machine: 0/lxd/6
      open-ports:
      - 3306/tcp
      public-address: 10.244.41.28
machines:
  "0":
    juju-status:
      current: started
      since: 02 Feb 2018 11:27:48Z
      version: 2.4-beta1
    machine-status:
      current: running
      message: Deployed
      since: 02 Feb 2018 11:27:19Z
    series: xenial
    ...
    containers:
      ...
      0/lxd/6:
        juju-status:
          current: started
          since: 02 Feb 2018 11:32:39Z
          version: 2.4-beta1
        dns-name: 10.244.41.28
        ip-addresses:
        - 10.244.41.28
        - 192.168.33.142
        instance-id: juju-6fd836-0-lxd-6
        machine-status:
          current: running
          message: Container started
          since: 02 Feb 2018 11:31:53Z
        series: xenial
        network-interfaces:
          eth0:
            ip-addresses:
            - 10.244.41.28
            - 10.244.40.33
            - 10.244.40.30
            - 10.244.40.31
            mac-address: 00:16:3e:d9:f0:72
            gateway: 10.244.40.1
            dns-nameservers:
            space: oam-space
            is-up: true
          eth1:
            ip-addresses:
            - 192.168.33.142
            mac-address: 00:16:3e:61:cf:04
            space: internal-space
            is-up: true
Comparing the space bindings in the bundle against this snippet, we see that internal-space is on the 192.168.33.x network, while the VIP provided to percona (10.244.40.89) is in the oam-space. These options are incompatible.
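One way to make the bindings consistent would be to put the VIP on the same network that the shared-db endpoint is bound to. A hypothetical bundle fragment follows; the space names match the snippet above, but the replacement VIP 192.168.33.89 is an illustrative assumption, not a value from the actual bundle:

```yaml
mysql:
  charm: cs:~openstack-charmers-next/percona-cluster-285
  options:
    # The VIP must be an unused address on the same network that
    # shared-db consumers use to reach percona -- here, internal-space.
    vip: 192.168.33.89  # illustrative; pick a free address in internal-space
  bindings:
    "": oam-space            # default binding for unlisted endpoints
    shared-db: internal-space
```

The inverse fix would also work: keep the existing VIP (10.244.40.89) and bind shared-db to oam-space instead. Either way, the VIP and the endpoint binding need to land on the same network.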
Given that this is an HA charm deployment, and the crashdump mentions the issue is also seen without HA, I'd like to see the non-HA bundle to go with that crashdump as well.
I'm marking this as Incomplete because, right now, I see a configuration issue in the bundle (mismatched network bindings) that I would expect to prevent this from working.