The "dns-slaves" option does not create "pools.yaml" file

Bug #1693162 reported by Tytus Kurek
This bug affects 1 person
Affects: OpenStack Designate Charm
Status: Fix Released
Importance: Undecided
Assigned to: Alex Kavanagh

Bug Description

It looks like the "dns-slaves" option is not fully implemented in the designate charm.

There is an OpenStack installation based on the following tool set:

MaaS: 2.2.0-rc4
Juju: 2.1.2
Charms: 17.02
Ubuntu: Trusty (with hwe-x kernel)

The designate charm has been deployed with the following settings:

  designate:
    charm: "./charms/designate"
    series: "trusty"
    num_units: 3
    constraints: "spaces=provisioning"
    bindings:
      admin: "openstack-admin"
      internal: "openstack-internal"
      public: "openstack-public"
      shared-db: "openstack-internal"
    options:
      debug: "true"
      enable-host-header: "true"
      openstack-origin: "cloud:trusty-mitaka"
      region: "RegionOne"
      use-syslog: "true"
      verbose: "true"
      vip: "10.24.111.14 10.24.112.14 10.24.113.14"
    to:
      - "lxd:controller/0"
      - "lxd:controller/1"
      - "lxd:controller/2"

The designate-bind charm has been deployed with the following settings:

  designate-bind:
    charm: "./charms/designate-bind"
    series: "trusty"
    num_units: 3
    constraints: "spaces=provisioning"
    options:
      debug: "true"
      use-syslog: "true"
      verbose: "true"
    to:
      - "lxd:controller/0"
      - "lxd:controller/1"
      - "lxd:controller/2"

The designate application has been related to all other applications according to the official documentation, apart from the designate-bind application. Instead, the "dns-slaves" option has to be used (the reasoning behind that is out of the scope of this bug report). Thus, the "dns-slaves" option has been configured as follows:

juju config designate dns-slaves="10.24.110.153:5354:ziGKFkwlUQiCFswtXeFncA== 10.24.110.212:5354:ziGKFkwlUQiCFswtXeFncA== 10.24.110.204:5354:ziGKFkwlUQiCFswtXeFncA=="

However, neither the "/etc/designate/pools.yaml" file nor the "/etc/designate/rndc.key" file has been created on the designate application units. Also, Juju still puts the units in the "blocked" state with the "'dns-backend' missing" message.
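For context, the dns-slaves value is a space-separated list of ip:port:rndc-key triples. A minimal parsing sketch of that format (a hypothetical helper, not the charm's actual code):

```python
def parse_dns_slaves(value):
    """Parse the dns-slaves string: a space-separated list of
    ip:port:rndc-key triples (format as used in this bug report)."""
    slaves = []
    for entry in value.split():
        # Split into at most three fields; the base64 rndc key may end
        # in '=' padding but never contains ':', so this is unambiguous.
        ip, port, key = entry.split(":", 2)
        slaves.append({"ip": ip, "port": int(port), "rndc_key": key})
    return slaves

slaves = parse_dns_slaves(
    "10.24.110.153:5354:ziGKFkwlUQiCFswtXeFncA== "
    "10.24.110.212:5354:ziGKFkwlUQiCFswtXeFncA== "
    "10.24.110.204:5354:ziGKFkwlUQiCFswtXeFncA=="
)
```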

Additional outputs for troubleshooting purposes:

tytus@maas:~$ juju status designate
Model Controller Cloud/Region Version
openstack maas maas 2.1.2

App Version Status Scale Charm Store Rev OS Notes
designate 2.0.0 blocked 3 designate local 0 ubuntu
hacluster-designate active 3 hacluster local 0 ubuntu
rsyslog-forwarder-ha unknown 3 rsyslog-forwarder-ha local 0 ubuntu

Unit Workload Agent Machine Public address Ports Message
designate/6 blocked idle 0/lxd/19 10.24.110.191 9001/tcp 'dns-backend' missing
  hacluster-designate/7 active executing 10.24.110.191 Unit is ready and clustered
  rsyslog-forwarder-ha/64 unknown idle 10.24.110.191
designate/7* blocked idle 1/lxd/19 10.24.110.210 9001/tcp 'dns-backend' missing
  hacluster-designate/6* active idle 10.24.110.210 Unit is ready and clustered
  rsyslog-forwarder-ha/63 unknown idle 10.24.110.210
designate/8 blocked idle 2/lxd/19 10.24.110.158 9001/tcp 'dns-backend' missing
  hacluster-designate/8 active executing 10.24.110.158 Unit is ready and clustered
  rsyslog-forwarder-ha/65 unknown idle 10.24.110.158

Machine State DNS Inst id Series AZ
0 started 10.24.110.164 p3dh36 trusty default
0/lxd/19 started 10.24.110.191 juju-c19341-0-lxd-19 trusty
1 started 10.24.110.179 bw8np7 trusty default
1/lxd/19 started 10.24.110.210 juju-c19341-1-lxd-19 trusty
2 started 10.24.110.242 t3777h trusty default
2/lxd/19 started 10.24.110.158 juju-c19341-2-lxd-19 trusty

Relation Provides Consumes Type
juju-info ceilometer rsyslog-forwarder-ha subordinate
juju-info ceph-mon-backup rsyslog-forwarder-ha subordinate
juju-info ceph-mon-openstack rsyslog-forwarder-ha subordinate
juju-info ceph-radosgw-backup rsyslog-forwarder-ha subordinate
juju-info cinder rsyslog-forwarder-ha subordinate
nova-designate compute designate regular
juju-info compute rsyslog-forwarder-ha subordinate
cluster designate designate peer
ha designate hacluster-designate subordinate
identity-service designate keystone regular
shared-db designate percona-cluster regular
amqp designate rabbitmq-server regular
juju-info designate rsyslog-forwarder-ha subordinate
juju-info designate-bind rsyslog-forwarder-ha subordinate
juju-info glance rsyslog-forwarder-ha subordinate
hanode hacluster-designate hacluster-designate peer
juju-info keystone rsyslog-forwarder-ha subordinate
juju-info memcached rsyslog-forwarder-ha subordinate
juju-info mongodb rsyslog-forwarder-ha subordinate
juju-info neutron-api rsyslog-forwarder-ha subordinate
juju-info neutron-gateway rsyslog-forwarder-ha subordinate
juju-info neutron-openvswitch rsyslog-forwarder-ha subordinate
juju-info nova-cloud-controller rsyslog-forwarder-ha subordinate
juju-info openstack-dashboard rsyslog-forwarder-ha subordinate
juju-info rabbitmq-server rsyslog-forwarder-ha subordinate
syslog rsyslog-forwarder-ha rsyslog-primary regular
syslog rsyslog-forwarder-ha rsyslog-secondary regular
juju-info storage-backup rsyslog-forwarder-ha subordinate
juju-info storage-openstack rsyslog-forwarder-ha subordinate

tytus@maas:~$ juju status designate-bind
Model Controller Cloud/Region Version
openstack maas maas 2.1.2

App Version Status Scale Charm Store Rev OS Notes
designate-bind 9.9.5.dfsg active 3 designate-bind local 0 ubuntu
rsyslog-forwarder-ha unknown 3 rsyslog-forwarder-ha local 0 ubuntu

Unit Workload Agent Machine Public address Ports Message
designate-bind/3 active idle 0/lxd/18 10.24.110.153 Unit is ready
  rsyslog-forwarder-ha/58 unknown idle 10.24.110.153
designate-bind/4 active idle 1/lxd/18 10.24.110.212 Unit is ready
  rsyslog-forwarder-ha/57 unknown idle 10.24.110.212
designate-bind/5* active idle 2/lxd/18 10.24.110.204 Unit is ready
  rsyslog-forwarder-ha/59 unknown idle 10.24.110.204

Machine State DNS Inst id Series AZ
0 started 10.24.110.164 p3dh36 trusty default
0/lxd/18 started 10.24.110.153 juju-c19341-0-lxd-18 trusty
1 started 10.24.110.179 bw8np7 trusty default
1/lxd/18 started 10.24.110.212 juju-c19341-1-lxd-18 trusty
2 started 10.24.110.242 t3777h trusty default
2/lxd/18 started 10.24.110.204 juju-c19341-2-lxd-18 trusty

Relation Provides Consumes Type
juju-info ceilometer rsyslog-forwarder-ha subordinate
juju-info ceph-mon-backup rsyslog-forwarder-ha subordinate
juju-info ceph-mon-openstack rsyslog-forwarder-ha subordinate
juju-info ceph-radosgw-backup rsyslog-forwarder-ha subordinate
juju-info cinder rsyslog-forwarder-ha subordinate
juju-info compute rsyslog-forwarder-ha subordinate
juju-info designate rsyslog-forwarder-ha subordinate
cluster designate-bind designate-bind peer
juju-info designate-bind rsyslog-forwarder-ha subordinate
juju-info glance rsyslog-forwarder-ha subordinate
juju-info keystone rsyslog-forwarder-ha subordinate
juju-info memcached rsyslog-forwarder-ha subordinate
juju-info mongodb rsyslog-forwarder-ha subordinate
juju-info neutron-api rsyslog-forwarder-ha subordinate
juju-info neutron-gateway rsyslog-forwarder-ha subordinate
juju-info neutron-openvswitch rsyslog-forwarder-ha subordinate
juju-info nova-cloud-controller rsyslog-forwarder-ha subordinate
juju-info openstack-dashboard rsyslog-forwarder-ha subordinate
juju-info rabbitmq-server rsyslog-forwarder-ha subordinate
syslog rsyslog-forwarder-ha rsyslog-primary regular
syslog rsyslog-forwarder-ha rsyslog-secondary regular
juju-info storage-backup rsyslog-forwarder-ha subordinate
juju-info storage-openstack rsyslog-forwarder-ha subordinate

root@juju-c19341-0-lxd-19:~# ls -l /etc/designate/
total 60
-rw-r--r-- 1 designate designate 2953 Apr 7 2016 api-paste.ini
drwxr-xr-x 2 designate designate 4096 Apr 11 2016 conf.d
-rw-r--r-- 1 designate designate 14156 Apr 11 2016 designate.conf
-rw-r--r-- 1 designate designate 14156 Apr 11 2016 designate.conf.sample
-rw-r--r-- 1 designate designate 4617 Apr 7 2016 policy.json
-rw-r--r-- 1 designate designate 949 Apr 11 2016 rootwrap.conf
-rw-r--r-- 1 designate designate 949 Apr 7 2016 rootwrap.conf.sample
drwxr-xr-x 2 designate designate 4096 May 24 09:04 rootwrap.d

Revision history for this message
Tytus Kurek (tkurek) wrote :

Additional output for troubleshooting purposes:

root@juju-c19341-0-lxd-18:~# cat /etc/bind/rndc.key
key "rndc-key" {
 algorithm hmac-md5;
 secret "ziGKFkwlUQiCFswtXeFncA==";
};

Changed in charm-designate:
assignee: nobody → Alex Kavanagh (ajkavanagh)
Changed in charm-designate:
status: New → In Progress
Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

Having investigated the charm code, it's true that unless a 'bind-rndc' interface is provided to the charm, it won't become active. Currently, this interface is provided by the designate-bind charm.

The charm does seem to support the dns-slaves option, and there is functionality to write the config to the relevant file, but it can't be activated due to the above problem.

I'm going to try to make the bind-rndc interface optional, and only block the charm if both the bind-rndc relation AND the dns-slaves option are missing. The charm should also be 'broken' if both are available, as this would be a conflict (I think).
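The proposed gating rule can be sketched as follows (a simplified illustration of the logic described above, not the charm's actual handler code):

```python
def dns_backend_status(has_bind_rndc, dns_slaves):
    """Return a (workload_status, message) pair for the unit.

    Sketch of the rule proposed above: the unit needs exactly one
    source of DNS backend configuration, either a bind-rndc relation
    or a non-empty dns-slaves config option.
    """
    if has_bind_rndc and dns_slaves:
        # Both sources present would be conflicting configuration.
        return ("blocked", "both 'dns-backend' relation and dns-slaves set")
    if not has_bind_rndc and not dns_slaves:
        return ("blocked", "'dns-backend' missing")
    return ("active", "Unit is ready")
```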

Revision history for this message
Tytus Kurek (tkurek) wrote :

Alex,

I have tried the candidate from cs:~ajkavanagh/xenial/designate-0. It no longer claims that the "dns-backend" relation is missing. However, it didn't create the "pools.yaml" file. Below are the steps I've taken so far:

$ cat designate.yaml

designate:
  debug: "true"
  dns-slaves: "10.24.110.225:5354:adAjl7nb9Y1FCwFj0Qb8lw== 10.24.110.156:5354:adAjl7nb9Y1FCwFj0Qb8lw== 10.24.110.220:5354:adAjl7nb9Y1FCwFj0Qb8lw=="
  openstack-origin: "cloud:trusty-mitaka"
  region: "RegionOne"
  use-syslog: "true"
  verbose: "true"
  vip: "10.24.110.14"

$ juju deploy ./charms/designate --series trusty --constraints="spaces=provisioning" --bind "shared-db=openstack-internal" --config ./designate.yaml --to lxd:0 designate
$ juju add-unit designate --to lxd:1
$ juju add-unit designate --to lxd:2
$ juju add-relation designate percona-cluster
$ juju add-relation designate rabbitmq-server
$ juju add-relation designate keystone

Juju status is as follows:

$ juju status designate

Model Controller Cloud/Region Version
openstack maas maas 2.1.2

App Version Status Scale Charm Store Rev OS Notes
designate active 3 designate local 2 ubuntu

Unit Workload Agent Machine Public address Ports Message
designate/6* active idle 0/lxd/20 10.24.110.250 9001/tcp Unit is ready
designate/7 active idle 1/lxd/18 10.24.110.181 9001/tcp Unit is ready
designate/8 active idle 2/lxd/18 10.24.110.202 9001/tcp Unit is ready

Machine State DNS Inst id Series AZ
0 started 10.24.110.251 p3dh36 trusty default
0/lxd/20 started 10.24.110.250 juju-a65552-0-lxd-20 trusty
1 started 10.24.110.192 bw8np7 trusty default
1/lxd/18 started 10.24.110.181 juju-a65552-1-lxd-18 trusty
2 started 10.24.110.197 t3777h trusty default
2/lxd/18 started 10.24.110.202 juju-a65552-2-lxd-18 trusty

Relation Provides Consumes Type
cluster designate designate peer
identity-service designate keystone regular
shared-db designate percona-cluster regular
amqp designate rabbitmq-server regular

The content of "/etc/designate" on deployed units is as follows:

root@juju-a65552-0-lxd-20:~# ls -l /etc/designate/
total 60
-rw-r--r-- 1 designate designate 2953 Apr 7 2016 api-paste.ini
drwxr-xr-x 2 designate designate 4096 Apr 11 2016 conf.d
-rw-r--r-- 1 designate designate 14156 Apr 11 2016 designate.conf
-rw-r--r-- 1 designate designate 14156 Apr 11 2016 designate.conf.sample
-rw-r--r-- 1 designate designate 4617 Apr 7 2016 policy.json
-rw-r--r-- 1 designate designate 949 Apr 11 2016 rootwrap.conf
-rw-r--r-- 1 designate designate 949 Apr 7 2016 rootwrap.conf.sample
drwxr-xr-x 2 designate designate 4096 May 31 11:14 rootwrap.d

Attached are Juju log files from deployed units.

Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

Thanks Tytus:

I must've misunderstood about the pools.yaml file. The charm does (and should) write /etc/designate/rndc_{name}.key files with the appropriate data from dns-slaves, e.g. /etc/designate/rndc_10_24_110_225.key. There is also a pools.yaml template which ought to get written. Is there a /var/log/juju/unit-designate-*.log file for the unit, so we can see what it's doing?
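The key-file naming described above (dots in the slave IP become underscores) could be derived like this; the helper name is hypothetical, only the resulting path follows the example given:

```python
def rndc_key_path(slave_ip):
    """Map a DNS slave's IP to its per-slave rndc key file path,
    following the naming shown above (dots become underscores)."""
    return "/etc/designate/rndc_{}.key".format(slave_ip.replace(".", "_"))
```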

Also, we should take this email conversation back to the bug. Please could you cut/paste the previous details and then the log files into the bug, and then I will re-respond there too. Thanks. (ref: https://bugs.launchpad.net/charm-designate/+bug/1693162)

I've also done a quick test with:

juju config designate dns-slaves="10.24.110.225:5354:adAjl7nb9Y1FCwFj0Qb8lw== 10.24.110.156:5354:adAjl7nb9Y1FCwFj0Qb8lw== 10.24.110.220:5354:adAjl7nb9Y1FCwFj0Qb8lw=="

And my /etc/designate looks like this:

api-paste.ini
conf.d
designate.conf
designate.conf.sample
policy.json
pools.yaml
rndc_10_24_110_156.key
rndc_10_24_110_220.key
rndc_10_24_110_225.key
rndc.key
rootwrap.conf
rootwrap.conf.sample
rootwrap.d

with the relevant info in the pools.yaml. So, this is odd.
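For reference, a pools.yaml of roughly the shape Designate expects would look like the sketch below. The pool id is Designate's well-known default pool UUID; every address, port, hostname, and key path here is illustrative, not taken from this deployment:

```yaml
- name: default
  description: Pool generated from the dns-slaves config option
  id: 794ccc2c-d751-44fe-b57f-8894c9f5c842
  ns_records:
    - hostname: ns1.example.com.
      priority: 1
  targets:
    - type: bind9
      masters:
        # address of the designate-mdns service (illustrative)
        - host: 10.24.110.191
          port: 5354
      options:
        # the bind slave and its rndc control channel (illustrative)
        host: 10.24.110.153
        port: 53
        rndc_host: 10.24.110.153
        rndc_port: 953
        rndc_key_file: /etc/designate/rndc_10_24_110_153.key
```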

Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

I've done some additional testing: ON install, it doesn't write the files. Sorting that out now.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-designate (master)

Fix proposed to branch: master
Review: https://review.openstack.org/469590

Revision history for this message
Tytus Kurek (tkurek) wrote :

Hi Alex,

I have tried the candidate from cs:~ajkavanagh/xenial/designate-2 and everything seems to be working fine apart from one thing: it doesn't configure the VIP on the unit nodes while the "vip" option is set:

tytus@maas:~$ juju config designate vip
10.24.110.14

tytus@maas:~$ openstack catalog list
+------------+----------------+-----------------------------------------------------------------------------+
| Name | Type | Endpoints |
+------------+----------------+-----------------------------------------------------------------------------+
| designate | dns | RegionOne |
| | | publicURL: http://10.24.110.159:9001 |
| | | internalURL: http://10.24.110.159:9001 |
| | | adminURL: http://10.24.110.159:9001 |

tytus@maas:~$ juju ssh designate/15 sudo ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
175: eth0@if176: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:a8:ed:6e brd ff:ff:ff:ff:ff:ff
    inet 10.24.110.237/24 brd 10.24.110.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fea8:ed6e/64 scope link
       valid_lft forever preferred_lft forever
177: eth1@if178: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f2:65:ed brd ff:ff:ff:ff:ff:ff
    inet 10.24.111.58/24 brd 10.24.111.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fef2:65ed/64 scope link
       valid_lft forever preferred_lft forever
179: eth2@if180: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:65:6b:9e brd ff:ff:ff:ff:ff:ff
    inet 10.24.112.63/24 brd 10.24.112.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe65:6b9e/64 scope link
       valid_lft forever preferred_lft forever
181: eth3@if182: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:dd:2e:01 brd ff:ff:ff:ff:ff:ff
    inet 10.24.113.58/24 brd 10.24.113.255 scope global eth3
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fedd:2e01/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.24.112.63 closed.
tytus@maas:~$ juju ssh designate/16 sudo ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft ...


Revision history for this message
Tytus Kurek (tkurek) wrote :

Actually, it turned out to be an issue on my side. I forgot to re-add the relation between the designate and hacluster charms after re-deploying the designate application.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-designate (master)

Reviewed: https://review.openstack.org/469590
Committed: https://git.openstack.org/cgit/openstack/charm-designate/commit/?id=fd98fa0722a0cb0bba832bc232ce089cdf553b47
Submitter: Jenkins
Branch: master

commit fd98fa0722a0cb0bba832bc232ce089cdf553b47
Author: Alex Kavanagh <email address hidden>
Date: Wed May 31 17:44:57 2017 +0100

    Resolve config option dns-slaves not working properly

    The charm didn't work unless a mandatory bind-rndc interface was
    present. However, the dns-slaves option was supposed to provide an
    alternative to needing to relate the unit to the (say) designate-bind
    charm. This change allows dns-slaves and/or a bind-rndc relation
    and configures the underlying service accordingly.

    Also some fixups to the tests to simplify handler verification using
    more recent charms.openstack features.

    Also note that the bind-rndc interface needed a fix (the depends-on)
    for if/when a bind-rndc relation is removed; the interface incorrectly
    maintained that the relation was present when it was not.

    Change-Id: Ib2c883e623292520224f882aef09d3710e1e1348
    Closes-Bug: #1693162
    Depends-On: I523fecff4e80734772872a8a6d2507f1e2162ae3

Changed in charm-designate:
status: In Progress → Fix Committed
James Page (james-page)
Changed in charm-designate:
milestone: none → 17.08
James Page (james-page)
Changed in charm-designate:
status: Fix Committed → Fix Released