Designate: 'shared-db' incomplete when access-network configuration used in percona-cluster

Bug #1659805 reported by Tytus Kurek
Affects                       Status        Importance  Assigned to   Milestone
OpenStack Designate Charm     Fix Released  Low         Liam Young
mysql-shared charm interface  Invalid       Medium      Unassigned

Bug Description

There is a deployment of OpenStack Mitaka on Ubuntu Trusty.

The problem is that the designate units hang in "waiting" status with the message "'shared-db' incomplete", even though designate is related to the percona-cluster application:

tytus@maas:~$ juju status designate
Model Controller Cloud/Region Version
openstack maas maas 2.0.1

App Version Status Scale Charm Store Rev OS Notes
designate 2.0.0 waiting 3 designate jujucharms 7 ubuntu
hacluster-designate active 3 hacluster jujucharms 31 ubuntu
rsyslog-forwarder-ha unknown 3 rsyslog-forwarder-ha jujucharms 10 ubuntu

Unit Workload Agent Machine Public address Ports Message
designate/0 waiting idle 0/lxd/5 100.127.20.64 'shared-db' incomplete
  hacluster-designate/2 active idle 100.127.20.64 Unit is ready and clustered
  rsyslog-forwarder-ha/46 unknown idle 100.127.20.64
designate/1* waiting idle 1/lxd/5 100.127.20.45 'shared-db' incomplete
  hacluster-designate/0* active idle 100.127.20.45 Unit is ready and clustered
  rsyslog-forwarder-ha/32 unknown idle 100.127.20.45
designate/2 waiting idle 2/lxd/5 100.127.20.53 'shared-db' incomplete
  hacluster-designate/1 active idle 100.127.20.53 Unit is ready and clustered
  rsyslog-forwarder-ha/35 unknown idle 100.127.20.53

Machine State DNS Inst id Series AZ
0 started 100.127.20.2 p3dh36 trusty default
0/lxd/5 started 100.127.20.64 juju-e0245e-0-lxd-5 trusty
1 started 100.127.20.3 bw8np7 trusty default
1/lxd/5 started 100.127.20.45 juju-e0245e-1-lxd-5 trusty
2 started 100.127.20.4 t3777h trusty default
2/lxd/5 started 100.127.20.53 juju-e0245e-2-lxd-5 trusty

Relation Provides Consumes Type
juju-info ceilometer rsyslog-forwarder-ha subordinate
juju-info ceph-mon-backup rsyslog-forwarder-ha subordinate
juju-info ceph-mon-openstack rsyslog-forwarder-ha subordinate
juju-info ceph-radosgw rsyslog-forwarder-ha subordinate
juju-info cinder rsyslog-forwarder-ha subordinate
cluster designate designate peer
dns-backend designate designate-bind regular
ha designate hacluster-designate subordinate
nova-designate designate hubudb1-sc-compute regular
identity-service designate keystone regular
shared-db designate percona-cluster regular
amqp designate rabbitmq-server regular
juju-info designate rsyslog-forwarder-ha subordinate
juju-info designate-bind rsyslog-forwarder-ha subordinate
juju-info glance rsyslog-forwarder-ha subordinate
hanode hacluster-designate hacluster-designate peer
juju-info hubudb1-sc-compute rsyslog-forwarder-ha subordinate
juju-info hubudb1-sc-osd-backup rsyslog-forwarder-ha subordinate
juju-info hubudb1-sc-osd-openstack rsyslog-forwarder-ha subordinate
juju-info keystone rsyslog-forwarder-ha subordinate
juju-info memcached rsyslog-forwarder-ha subordinate
juju-info mongodb rsyslog-forwarder-ha subordinate
juju-info neutron-api rsyslog-forwarder-ha subordinate
juju-info neutron-gateway rsyslog-forwarder-ha subordinate
juju-info neutron-openvswitch rsyslog-forwarder-ha subordinate
juju-info nova-cloud-controller rsyslog-forwarder-ha subordinate
juju-info openstack-dashboard rsyslog-forwarder-ha subordinate
juju-info rabbitmq-server rsyslog-forwarder-ha subordinate
syslog rsyslog-forwarder-ha rsyslog-primary regular
syslog rsyslog-forwarder-ha rsyslog-secondary regular

P.S.: I am not able to report this bug under https://bugs.launchpad.net/charms/+source/designate/+filebug

Colin Watson (cjwatson)
affects: launchpad → charm-designate
Revision history for this message
Tytus Kurek (tkurek) wrote :

The following message shows up in the '/var/log/juju/unit-designate-0.log' file every 5 minutes:

INFO juju-log Invoking reactive handler: reactive/designate_handlers.py:43:setup_database

Revision history for this message
James Page (james-page) wrote :

The 5-minute log message is the update-status hook running (probably not related to this issue).
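
For context, handlers in a charms.reactive charm are re-evaluated on every hook, including update-status, so a handler gated only on the relation being connected keeps logging until the relation data is complete. A minimal sketch along the lines of setup_database (identifiers and the configure() call are assumptions, not the charm's verbatim code):

    # Hypothetical sketch of a charms.reactive handler resembling
    # reactive/designate_handlers.py:setup_database; names are illustrative.
    import charms.reactive as reactive

    @reactive.when('shared-db.connected')
    def setup_database(database):
        # Request the databases designate needs; the interface raises
        # 'shared-db.available' only once percona-cluster answers with
        # credentials, so this handler keeps firing (e.g. on every
        # update-status hook) while the relation data stays incomplete.
        database.configure('designate', 'designate', prefix='designate')
        database.configure('dpm', 'dpm', prefix='dpm')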

Revision history for this message
James Page (james-page) wrote :

Waiting status indicates the relation is present, but the expected data from pxc -> designate has not been provided; we need to peek at the relation data between percona-cluster and designate.
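
For illustration, "complete" here roughly means percona-cluster has answered with credentials for each requested prefix; a simplified check (key names are assumptions, not the charm's verbatim logic):

    # Illustrative only: the shared-db data is treated as complete once
    # PXC publishes a password per requested prefix plus a database host.
    def shared_db_complete(rel_data, prefixes=('designate', 'dpm')):
        return bool(rel_data.get('db_host')) and all(
            rel_data.get('{}_password'.format(p)) for p in prefixes)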

Revision history for this message
Tytus Kurek (tkurek) wrote :

Please find attached relation data:

root@juju-3c76a4-1-lxd-5:/var/lib/juju/agents/unit-designate-1/charm# ls -l ../state/relations/103/
total 12
-rw-r--r-- 1 root root 18 Feb 3 11:35 percona-cluster-0
-rw-r--r-- 1 root root 18 Feb 3 11:32 percona-cluster-1
-rw-r--r-- 1 root root 18 Feb 3 11:35 percona-cluster-2
root@juju-3c76a4-1-lxd-5:/var/lib/juju/agents/unit-designate-1/charm# relation-get -r 103 - designate/0
designate_database: designate
designate_hostname: 100.127.20.59
designate_username: designate
dpm_database: dpm
dpm_hostname: 100.127.20.59
dpm_username: dpm
private-address: 100.127.20.59
root@juju-3c76a4-1-lxd-5:/var/lib/juju/agents/unit-designate-1/charm# relation-get -r 103 - designate/1
designate_database: designate
designate_hostname: 100.127.20.43
designate_username: designate
dpm_database: dpm
dpm_hostname: 100.127.20.43
dpm_username: dpm
private-address: 100.127.20.43
root@juju-3c76a4-1-lxd-5:/var/lib/juju/agents/unit-designate-1/charm# relation-get -r 103 - designate/2
designate_database: designate
designate_hostname: 100.127.20.33
designate_username: designate
dpm_database: dpm
dpm_hostname: 100.127.20.33
dpm_username: dpm
private-address: 100.127.20.33
root@juju-3c76a4-1-lxd-5:/var/lib/juju/agents/unit-designate-1/charm# relation-get -r 103 - percona-cluster/0
access-network: 100.127.21.0/24
private-address: 100.127.20.20
root@juju-3c76a4-1-lxd-5:/var/lib/juju/agents/unit-designate-1/charm# relation-get -r 103 - percona-cluster/1
private-address: 100.127.20.22
root@juju-3c76a4-1-lxd-5:/var/lib/juju/agents/unit-designate-1/charm# relation-get -r 103 - percona-cluster/2
private-address: 100.127.20.54

Revision history for this message
James Page (james-page) wrote :

Looks like PXC has not processed the database access request - anything in the juju unit logs on the percona-cluster leader unit with regards to the dpm and designate databases?

Revision history for this message
Tytus Kurek (tkurek) wrote :

Attached is the agent log from the percona-cluster/0 unit.

Revision history for this message
James Page (james-page) wrote :

OK I see the issue here - if access-network is configured on PXC, and units don't present a source hostname/IP that's within the subnet configured for access-network, PXC just won't give out any data to those units.

This appears to be a gap in how the new reactive charms handle the older configuration-based network support used across the charm set.
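
For illustration, the access-network gating described above amounts to a subnet membership test (a simplified sketch, not the percona-cluster charm's verbatim code):

    # Simplified sketch of the access-network check; names are illustrative.
    import ipaddress

    def client_in_access_network(client_addr, access_network):
        # e.g. client_addr='100.127.20.59', access_network='100.127.21.0/24'
        return (ipaddress.ip_address(client_addr)
                in ipaddress.ip_network(access_network))

    # With access-network=100.127.21.0/24, the designate hostnames above
    # (100.127.20.x) fail this test, so PXC never publishes credentials.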

Revision history for this message
James Page (james-page) wrote :

The mysql-shared interface correctly detects the presentation of the access-network; I think the charm, rather than the interface, should handle this, but that is a little divergent from the way the interface deals with Juju network spaces, which is to use the network space binding for the relation name within the interface itself.
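
For comparison, the network-space style mentioned above resolves the local address from the relation's binding; a sketch using charmhelpers (whether the interface uses exactly this call is an assumption):

    # Network-space binding approach (illustrative): ask Juju for the
    # primary address of the space bound to the 'shared-db' endpoint.
    from charmhelpers.core import hookenv

    def shared_db_local_address():
        try:
            return hookenv.network_get_primary_address('shared-db')
        except NotImplementedError:
            # Older Juju without network-get; fall back to the unit address.
            return hookenv.unit_get('private-address')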

James Page (james-page)
affects: charm-designate → charm-interface-mysql-shared
affects: charm-interface-mysql-shared → charm-designate
Changed in charm-designate:
status: New → Triaged
importance: Undecided → Low
Changed in charm-interface-mysql-shared:
status: New → Triaged
importance: Undecided → Medium
summary: - Designate: 'shared-db' incomplete
+ Designate: 'shared-db' incomplete when access-network configuration used
+ in percona-cluster
Tytus Kurek (tkurek)
tags: added: 4010
Changed in charm-designate:
assignee: nobody → Edward Hope-Morley (hopem)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-designate (master)

Fix proposed to branch: master
Review: https://review.openstack.org/502114

Changed in charm-designate:
status: Triaged → In Progress
Changed in charm-designate:
milestone: none → 18.02
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-designate (master)

Reviewed: https://review.openstack.org/502114
Committed: https://git.openstack.org/cgit/openstack/charm-designate/commit/?id=656cffa5a9343172db1327ea1b5407b185abcd71
Submitter: Jenkins
Branch: master

commit 656cffa5a9343172db1327ea1b5407b185abcd71
Author: Edward Hope-Morley <email address hidden>
Date: Fri Sep 8 17:09:13 2017 +0100

    Ensure access-network address provided if required

    When the shared-db relation is joined and the db application
    has provided an access-network, ensure that the local address
    we provide is within the access network cidr.

    Change-Id: I180ed3f2eebf59848d12b091551cfc038f837f84
    Closes-Bug: 1659805
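
In other words, when the db application advertises an access-network, the charm should answer with one of its own addresses inside that CIDR rather than its private-address. A minimal sketch of that selection (an assumption for illustration, not the merged patch; charmhelpers offers similar helpers such as get_address_in_network):

    # Sketch of the idea behind the fix (illustrative, not the merged code):
    # pick a local IPv4 address that falls inside the advertised CIDR.
    import ipaddress
    import netifaces

    def local_address_in_network(cidr):
        net = ipaddress.ip_network(cidr)
        for iface in netifaces.interfaces():
            for entry in netifaces.ifaddresses(iface).get(netifaces.AF_INET, []):
                if ipaddress.ip_address(entry['addr']) in net:
                    return entry['addr']
        return None  # no local address in the access-network

    # e.g. local_address_in_network('100.127.21.0/24') would return the
    # unit's address on that subnet, which is then presented as *_hostname.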

Changed in charm-designate:
status: In Progress → Fix Committed
James Page (james-page)
Changed in charm-designate:
status: Fix Committed → In Progress
James Page (james-page)
Changed in charm-designate:
assignee: Edward Hope-Morley (hopem) → Liam Young (gnuoy)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Reviewed: https://review.openstack.org/502447
Committed: https://git.openstack.org/cgit/openstack/charm-designate/commit/?id=acd7130048ce984c5dbecafaa7aa560e55be4d10
Submitter: Jenkins
Branch: master

commit acd7130048ce984c5dbecafaa7aa560e55be4d10
Author: Liam Young <email address hidden>
Date: Mon Sep 11 10:47:39 2017 +0000

    Ensure access-network address provided if required

    When the shared-db relation is joined and the db application
    has provided an access-network, ensure that the local address
    we provide is within the access network cidr.

    Closes-Bug: 1659805

    Change-Id: Ib953b7586a13a05264d9edbfc39c877ae9ca5c70

Changed in charm-designate:
status: In Progress → Fix Committed
Tytus Kurek (tkurek)
tags: added: cpe-onsite
Ryan Beisner (1chb1n)
Changed in charm-designate:
status: Fix Committed → Fix Released
Michał Ajduk (majduk)
Changed in charm-interface-mysql-shared:
status: Triaged → Invalid