"Can't connect to MySQL server on 'xx.xx.xx.xx' ([Errno 111] Connection refused)"

Bug #1747053 reported by Jason Hobbs
This bug affects 1 person
Affects                 | Status  | Importance | Assigned to | Milestone
OpenStack Cinder Charm  | Expired | Undecided  | Unassigned  |
OpenStack Heat Charm    | Expired | Undecided  | Unassigned  |

Bug Description

A heat unit went into error status after heat db-sync failed to connect to the mysql database.

Traceback:
http://paste.ubuntu.com/26506874/

Bundle:
http://paste.ubuntu.com/26506876/
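The pastes above may have expired, but the failure mode itself is easy to characterize: errno 111 (ECONNREFUSED) means the target host was reachable, yet nothing was accepting connections on the MySQL port at the address the client used. A minimal stdlib sketch (the port number below is an assumption chosen so nothing is listening on it) reproduces the same OS-level error locally:

```python
import errno
import socket

def probe_mysql(host: str, port: int = 3306, timeout: float = 3.0) -> str:
    """Attempt a raw TCP connection to a MySQL endpoint and classify the result."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError as exc:
        # errno 111 (ECONNREFUSED): the host answered but no service is
        # listening on that port -- the same condition db-sync reported.
        assert exc.errno == errno.ECONNREFUSED
        return "refused"
    except OSError:
        # Timeouts / unreachable networks land here instead.
        return "unreachable"

# Probing a loopback port with no listener yields the refusal immediately.
print(probe_mysql("127.0.0.1", 3307))
```

The distinction matters for triage: "refused" points at a service bound to the wrong address (or not running), while a timeout would point at routing or firewalling between spaces.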

Revision history for this message
Jason Hobbs (jason-hobbs) wrote :
tags: added: foundations-engine
tags: removed: cpe-foundations
Revision history for this message
John George (jog) wrote :

Hit this again, this time from a cinder unit.

Revision history for this message
John George (jog) wrote :
Revision history for this message
Christian Reis (kiko) wrote :

This is only occurring in our DNS-HA runs, right?

Revision history for this message
Jason Hobbs (jason-hobbs) wrote :

No, we've hit it on non-DNS-HA runs too: https://solutions.qa.canonical.com/#/qa/bug/1747053

Revision history for this message
Ashley Lai (alai) wrote :

We hit this bug again. It is running xenial-pike with stable charms.

Revision history for this message
Ashley Lai (alai) wrote :
Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

Jason / Ashley,

The last update on this bug is over a year ago. There have been a lot of changes to the charms in that time, so I'm going to mark this as incomplete, pending an update. If you have seen this occur more recently, please re-open this bug and we can dig in more!

Changed in charm-cinder:
status: New → Incomplete
Changed in charm-heat:
status: New → Incomplete
Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

Looking at the bundle + trace provided in the initial bug description, I suspect that the mismatch between the IPs appearing in the errors and the VIP that the percona charm is configured with is the core issue.

The VIP in the bundle is "10.244.40.89" - but the network that the charm is trying to talk to Percona on is "192.168.33.142". The machine that percona-cluster/0 is on, as well as the IP address that the charm seems to be presenting, is "10.244.41.28." These mismatches suggest, to me, that there is a binding issue at play causing the units to be unable to communicate over their desired networks.
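The mismatch described above can be checked mechanically with the stdlib `ipaddress` module. This is a hedged sketch: the prefix lengths (/21 for the OAM space, /24 for the internal space) are assumptions, since the bundle does not state them explicitly.

```python
import ipaddress

# Networks assumed from the addresses seen in the juju status
# (prefix lengths are guesses; the bundle does not state them).
oam_space      = ipaddress.ip_network("10.244.40.0/21")
internal_space = ipaddress.ip_network("192.168.33.0/24")

vip        = ipaddress.ip_address("10.244.40.89")    # VIP given to percona-cluster
db_attempt = ipaddress.ip_address("192.168.33.142")  # address the client tried

# The VIP lives in the OAM space, while the database traffic the clients
# attempted was on internal-space -- so the VIP is unreachable on the
# network the shared-db relation actually uses.
print(vip in oam_space)             # True
print(vip in internal_space)        # False
print(db_attempt in internal_space) # True
```

A quick membership test like this is a useful first step whenever a charm's VIP and its relation bindings appear to disagree.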

Some snippets from the juju status:

  mysql:
    charm: cs:~openstack-charmers-next/percona-cluster-285
    ...
    relations:
      cluster:
      - mysql
      ha:
      - hacluster-pxc
      shared-db:
      - aodh
      - cinder
      - designate
      - glance
      - gnocchi
      - heat
      - keystone
      - neutron-api
      - nova-cloud-controller
      - openstack-dashboard
    units:
      mysql/0:
        workload-status:
          current: active
          message: Unit is ready
          since: 02 Feb 2018 12:03:20Z
        juju-status:
          current: executing
          message: running db-admin-relation-changed hook
          since: 02 Feb 2018 12:03:21Z
          version: 2.4-beta1
        leader: true
        machine: 0/lxd/6
        open-ports:
        - 3306/tcp
        public-address: 10.244.41.28

machines:
  "0":
    juju-status:
      current: started
      since: 02 Feb 2018 11:27:48Z
      version: 2.4-beta1
    machine-status:
      current: running
      message: Deployed
      since: 02 Feb 2018 11:27:19Z
    series: xenial
    ...
    containers:
      ...
      0/lxd/6:
        juju-status:
          current: started
          since: 02 Feb 2018 11:32:39Z
          version: 2.4-beta1
        dns-name: 10.244.41.28
        ip-addresses:
        - 10.244.41.28
        - 192.168.33.142
        instance-id: juju-6fd836-0-lxd-6
        machine-status:
          current: running
          message: Container started
          since: 02 Feb 2018 11:31:53Z
        series: xenial
        network-interfaces:
          eth0:
            ip-addresses:
            - 10.244.41.28
            mac-address: 00:16:3e:d9:f0:72
            gateway: 10.244.40.1
            dns-nameservers:
            - 10.244.40.33
            - 10.244.40.30
            - 10.244.40.31
            space: oam-space
            is-up: true
          eth1:
            ip-addresses:
            - 192.168.33.142
            mac-address: 00:16:3e:61:cf:04
            space: internal-space
            is-up: true

Looking at the space bindings between the bundle and this snippet, we see that internal-space is on the 192.168.33.0 network, but the VIP provided to percona is in the OAM space. These options are incompatible.
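For contrast, a consistent arrangement would place the VIP inside the same space that the client-facing relation is bound to. This is only a sketch: the VIP address below and the exact space names are assumptions for illustration, not values from the bundle.

```yaml
mysql:
  charm: cs:~openstack-charmers-next/percona-cluster-285
  options:
    # The VIP must be an address on the network clients actually use to
    # reach the database -- here, an assumed address in internal-space.
    vip: 192.168.33.250
  bindings:
    # Bind the client-facing relation to the same space as the VIP.
    shared-db: internal-space
    cluster: internal-space
```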

Given that this is an HA charm deployment that I'm looking at, I'd like to see the non-ha bundle to go with the crashdump mentioning that we see it there as well.

I'm marking this as incomplete because, right now, I see a configuration issue in the bundle (mismatched network bindings) that would lead me to expect this not to work.

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack heat charm because there has been no activity for 60 days.]

Changed in charm-heat:
status: Incomplete → Expired
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack cinder charm because there has been no activity for 60 days.]

Changed in charm-cinder:
status: Incomplete → Expired