Pacemaker reports haproxy_monitor not running

Bug #1800980 reported by Xav Paice
This bug affects 3 people
Affects                     Status        Importance  Assigned to  Milestone
Gnocchi Charm               Fix Released  High        James Page   19.04
OpenStack AODH Charm        Fix Released  High        James Page   19.04
OpenStack Barbican Charm    Fix Released  High        James Page   19.04
OpenStack Designate Charm   Fix Released  High        James Page   19.04
charms.openstack            Fix Released  Undecided   Liam Young

Bug Description

When deploying 3 units with cs:hacluster-49 and the 18.08 charms (cs:aodh-17, cs:designate-21, …), after a short while (sometimes minutes, sometimes hours) pacemaker reports the following:

(example):
ubuntu@juju-5b85c6-19-lxd-0:~$ sudo crm status
Last updated: Thu Nov 1 02:23:56 2018 Last change: Wed Sep 19 05:07:40 2018 by hacluster via crmd on juju-5b85c6-19-lxd-0
Stack: corosync
Current DC: juju-5b85c6-16-lxd-0 (version 1.1.14-70404b0) - partition with quorum
3 nodes and 6 resources configured

Online: [ juju-5b85c6-16-lxd-0 juju-5b85c6-19-lxd-0 juju-5b85c6-20-lxd-0 ]

Full list of resources:

 Resource Group: grp_aodh_vips
     res_aodh_eth0_vip (ocf::heartbeat:IPaddr2): Started juju-5b85c6-19-lxd-0
     res_aodh_eth1_vip (ocf::heartbeat:IPaddr2): Started juju-5b85c6-19-lxd-0
     res_aodh_eth2_vip (ocf::heartbeat:IPaddr2): Started juju-5b85c6-19-lxd-0
 Clone Set: cl_res_aodh_haproxy [res_aodh_haproxy]
     Started: [ juju-5b85c6-16-lxd-0 juju-5b85c6-19-lxd-0 juju-5b85c6-20-lxd-0 ]

Failed Actions:
* res_aodh_haproxy_monitor_5000 on juju-5b85c6-20-lxd-0 'not running' (7): call=2062, status=complete, exitreason='none',
    last-rc-change='Wed Oct 24 04:13:28 2018', queued=0ms, exec=0ms
* res_aodh_haproxy_monitor_5000 on juju-5b85c6-16-lxd-0 'not running' (7): call=2106, status=complete, exitreason='none',
    last-rc-change='Tue Oct 23 22:01:57 2018', queued=0ms, exec=0ms
* res_aodh_haproxy_monitor_5000 on juju-5b85c6-19-lxd-0 'not running' (7): call=2074, status=complete, exitreason='none',
    last-rc-change='Tue Oct 23 02:24:55 2018', queued=0ms, exec=0ms

This affects at least aodh, designate, and gnocchi.

Looking at a bunch of affected units, I'm seeing unit logs during update-status hooks which call render_config(), plus a bunch of relation-get, network-list, etc. If that's restarting or refreshing services, that'll trigger pacemaker to report a failure. Looking at process ages, for example on a Gnocchi unit, I see the apache and wsgi processes are only a matter of hours old even though no changes were made recently. Example log from the same time as 'last-rc-change' on a gnocchi unit: https://pastebin.canonical.com/p/9YZGhbNzYY/

~$ juju run --unit gnocchi/0 'charms.reactive -p get_flags'
['amqp.connected',
 'charm.installed',
 'charms.openstack.do-default-charm.installed',
 'charms.openstack.do-default-config.changed',
 'charms.openstack.do-default-identity-service.available',
 'charms.openstack.do-default-identity-service.connected',
 'charms.openstack.do-default-shared-db.connected',
 'charms.openstack.do-default-update-status',
 'cluster.available',
 'cluster.connected',
 'config.rendered',
 'coordinator-memcached.available',
 'coordinator-memcached.connected',
 'db.synced',
 'gnocchi-installed',
 'ha.available',
 'ha.connected',
 'haproxy.stat.password',
 'identity-service.available',
 'identity-service.available.auth',
 'identity-service.connected',
 'metric-service.connected',
 'run-default-upgrade-charm',
 'shared-db.available',
 'shared-db.connected',
 'ssl.enabled',
 'storage-ceph.available',
 'storage-ceph.connected',
 'storage-ceph.pools.available']

Revision history for this message
Xav Paice (xavpaice) wrote :

Logs from an aodh unit - it appears to be re-rendering the config without checking if it needs to - https://pastebin.canonical.com/p/C47KMbMRmc/

Paul Gear (paulgear)
summary: - Pacemaker reports haproxy_montor not running
+ Pacemaker reports haproxy_monitor not running
Revision history for this message
Liam Young (gnuoy) wrote :

Hi Xav, it is normal for the openstack charms to render config whether it is needed or not. The service restart checker looks for a change and only restarts the services if the associated config file has changed. I think there are potentially two issues here:

1) Something is changing in the file(s) that shouldn't be. Let's find what it is and fix it.
2) update-status is doing more work than it needs to. I think other charms have a fix for this so I'll see if I can dig it out.

TBH I'm not convinced that the correct solution is to create a state that blocks rendering of config.
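
For reference, the general pattern Liam describes works roughly like this (a minimal sketch of the idea only, not the actual charm-helpers/charms.openstack code; the function and map names are illustrative):

    # Sketch of the "restart only on real change" pattern: hash the managed
    # files, re-render unconditionally, then restart only the services whose
    # files actually changed on disk.
    import hashlib
    import os
    import subprocess


    def file_hash(path):
        # Hex digest of a file's contents, or None if the file is absent.
        if not os.path.exists(path):
            return None
        with open(path, 'rb') as f:
            return hashlib.md5(f.read()).hexdigest()


    def render_and_restart(render_all, restart_map):
        # restart_map: config path -> services depending on it, e.g.
        # {'/etc/haproxy/haproxy.cfg': ['haproxy']}
        before = {path: file_hash(path) for path in restart_map}
        render_all()  # templates are always re-rendered
        changed = [p for p in restart_map if file_hash(p) != before[p]]
        for service in {s for p in changed for s in restart_map[p]}:
            subprocess.check_call(['systemctl', 'restart', service])

The corollary is that a haproxy.cfg which differs only in the ordering of its stanzas still changes the checksum, so haproxy is restarted even though nothing meaningful changed, and pacemaker's monitor can then catch the service mid-restart.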

Revision history for this message
Liam Young (gnuoy) wrote :

I haven't been able to reproduce with the openstack provider, which does not have network spaces. I'll have another try with spaces in play. It may be ordering inside haproxy, for example, although I believe that was fixed prior to 18.08.

Revision history for this message
Xav Paice (xavpaice) wrote :

I'm happy to abandon those reviews, would appreciate some attention on finding the fix.

It's interesting to note that all 3 charms affected (that we know of, there may be more) are reactive. Maybe there's something common between them that is the root cause.

Revision history for this message
Xav Paice (xavpaice) wrote :

Added charms.openstack, as configs are either changing too often, or restart_on_change is restarting when there's no change.

Revision history for this message
Billy Olsen (billy-olsen) wrote :

I've reviewed the code a bit, and from the logs I've seen of an instance with this issue, the haproxy service is always restarted when the haproxy.cfg file is rewritten (which is intentional, of course).

The logs show the line "DEBUG juju-log Writing file /etc/haproxy/haproxy.cfg root:designate 640" whenever the file is written to disk. The file is only written to disk if its current contents differ from the contents pending to be written. All inputs and variables are static, so this must be something along the lines of the ordering of values in the file changing.

Upon studying the code, I'm fairly positive this has to do with inconsistent dict traversal across hook invocations. If we look at charmhelpers.contrib.openstack.context.HAProxyContext [0], cluster_hosts is a standard Python dict:

        cluster_hosts = {}

which has host information added to it based on the relations and networks it retrieves. The data added to the cluster_hosts dict is populated consistently, due to self.address_types being a list (consistent traversal ordering) and consistent traversal of related_units (it's sorted AND there's only a single cluster relation).

After population, the primitive Python dict is supplied to the jinja template for building out the haproxy.cfg file. This is passed to the jinja template as the frontends variable:

        ctxt = {
            'frontends': cluster_hosts,
            'default_backend': addr
        }

The frontends var is iterated on in the jinja template when filling out frontend and backend details:

    {% for frontend in frontends -%}
    acl net_{{ frontend }} dst {{ frontends[frontend]['network'] }}
    use_backend {{ service }}_{{ frontend }} if net_{{ frontend }}
    {% endfor -%}

and

    {% for unit, address in frontends[frontend]['backends'].items() -%}
    server {{ unit }} {{ address }}:{{ ports[1] }} check
    {% endfor %}

Since the jinja templates are using the dict as read-only, it's likely that the ordering is consistent within the rendering of the template (as the iteration occurs within the same hook invocation and Python interpreter runtime).

The change to fix this should be simple enough: instantiate cluster_hosts as a collections.OrderedDict rather than a primitive Python dict.
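
As a standalone illustration of the point (not the charm code; render() here just stands in for the jinja loop quoted above): on the Python versions in use at the time, a plain dict gave no iteration-order guarantee, whereas an OrderedDict populated in a fixed order renders identically on every invocation.

    # Illustration only: why the traversal order of the hosts mapping decides
    # whether the rendered haproxy config is byte-identical across hook runs.
    import collections


    def render(frontends):
        # Stand-in for the jinja 'for frontend in frontends' loop above.
        return "\n".join(
            "acl net_{0} dst {1}".format(addr, data['network'])
            for addr, data in frontends.items())


    hosts = collections.OrderedDict()
    hosts['10.244.32.48'] = {'network': '10.244.32.48/255.255.255.0'}
    hosts['192.168.33.52'] = {'network': '192.168.33.52/255.255.255.0'}
    # With an OrderedDict (and a fixed insertion order) this prints the acl
    # lines in the same order every time; with a plain dict on Python < 3.7
    # the order, and therefore the written file, could vary between runs.
    print(render(hosts))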

Revision history for this message
Billy Olsen (billy-olsen) wrote :

Raised issue #237 against charmhelpers.

https://github.com/juju/charm-helpers/issues/237

Revision history for this message
Liam Young (gnuoy) wrote :

I can't reproduce with network spaces + maas using:

App                   Version  Status  Scale  Charm       Store       Rev  OS      Notes
aodh                  7.0.0    active  3      aodh        jujucharms  17   ubuntu
aodh-hacluster                 active  3      hacluster   jujucharms  49   ubuntu
ceilometer            11.0.0   active  3      ceilometer  jujucharms  255  ubuntu
ceilometer-hacluster           active  3      hacluster   jujucharms  49   ubuntu
gnocchi               4.3.1    active  3      gnocchi     jujucharms  11   ubuntu
gnocchi-hacluster              active  3      hacluster   jujucharms  49   ubuntu

If you have an environment exhibiting this behaviour, can you take a copy of the charm-managed files and then, when a restart occurs, check what has changed? For aodh the files of interest are:

/etc/apache2/sites-enabled/aodh-api.conf
/etc/haproxy/haproxy.cfg
/etc/aodh/aodh.conf
/etc/aodh/api-paste.ini

It would also be really useful to monitor the data on the ha relation, so if you could gather that before and after an unexpected restart it would help a lot. Something like:

juju run --application aodh-hacluster "relation-get -r \$(relation-ids ha) - \$(relation-list -r \$(relation-ids ha))"

Changed in charm-gnocchi:
status: New → Incomplete
Changed in charm-aodh:
status: New → Incomplete
Changed in charm-designate:
status: New → Incomplete
Revision history for this message
James Troup (elmo) wrote : Re: [Bug 1800980] Re: Pacemaker reports haproxy_monitor not running

Liam Young <email address hidden> writes:

> I can't reproduce with network spaces + maas using:

As a low key passive aggressive aside, it'd be nice if there was
enough logging by whatever decides the config has changed (and needs
to be re-rendered) that you wouldn't need to reproduce it
independently.

In general, for operations, knowing why and when service config has
been updated is super useful, I wouldn't even hide it behind DEBUG.

--
James

Revision history for this message
Liam Young (gnuoy) wrote :

Billy, I'm sure you are right and haproxy is the culprit. However, the reactive charms don't use the part of charm-helpers you mention and already use an OrderedDict for cluster_hosts. Perhaps Bug #1754149 or Bug #1737776 are not entirely resolved and this is a duplicate of one of those. As Elmo says, logging what is changing in the files would be very useful.

Liam Young (gnuoy)
Changed in charms.openstack:
assignee: nobody → Liam Young (gnuoy)
Revision history for this message
Liam Young (gnuoy) wrote :

elmo, xavpaice, I was thinking about the suggestion to log diffs, but I'm worried about the security implications of potentially logging passwords etc. to the logs. Would keeping the last N backups of a file in a secure location on the unit be acceptable, so they can be manually diff'd?
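
Something along these lines would probably cover the "last N backups" idea (an illustrative sketch only; the backup directory and retention count are made-up values, and this is not existing charm code):

    # Keep the last few copies of a rendered config in a root-only directory
    # so unexpected rewrites can be diffed by hand without logging secrets.
    import os
    import shutil
    import time

    BACKUP_DIR = '/var/lib/charm-config-backups'  # hypothetical location
    KEEP = 5                                       # hypothetical retention


    def backup_before_write(path):
        if not os.path.exists(path):
            return
        if not os.path.isdir(BACKUP_DIR):
            os.makedirs(BACKUP_DIR)
        os.chmod(BACKUP_DIR, 0o700)  # root-only, so config secrets don't leak
        stamp = time.strftime('%Y%m%d%H%M%S')
        name = '{}.{}'.format(os.path.basename(path), stamp)
        shutil.copy2(path, os.path.join(BACKUP_DIR, name))
        # Prune to the most recent KEEP copies of this particular file.
        prefix = os.path.basename(path) + '.'
        copies = sorted(f for f in os.listdir(BACKUP_DIR)
                        if f.startswith(prefix))
        for old in copies[:-KEEP]:
            os.remove(os.path.join(BACKUP_DIR, old))

Calling backup_before_write('/etc/haproxy/haproxy.cfg') just before the template is written would leave a diff-able trail on the unit.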

Revision history for this message
Xav Paice (xavpaice) wrote :

Unfortunately, in order to reproduce this I need to allow the alerts to happen, which isn't something we can easily do in a production environment, given that I've put in a temporary workaround so we have monitoring. Can you reproduce this in a test environment and then log the differences there?

Revision history for this message
Xav Paice (xavpaice) wrote :

Sample sdiff from the haproxy configs, about 5 mins apart:

frontend tcp-in_aodh-api_admin frontend tcp-in_aodh-api_admin
    bind *:8042 bind *:8042
    bind :::8042 bind :::8042
    acl net_10.244.32.53 dst 10.244.32.53/255.255.255.0 <
    use_backend aodh-api_admin_10.244.32.53 if net_10.244.32. <
    acl net_192.168.33.57 dst 192.168.33.57/255.255.255.0 acl net_192.168.33.57 dst 192.168.33.57/255.255.255.0
    use_backend aodh-api_admin_192.168.33.57 if net_192.168.3 use_backend aodh-api_admin_192.168.33.57 if net_192.168.3
                                                              > acl net_10.244.32.53 dst 10.244.32.53/255.255.255.0
                                                              > use_backend aodh-api_admin_10.244.32.53 if net_10.244.32.
    acl net_10.245.208.156 dst 10.245.208.156/255.255.255.0 acl net_10.245.208.156 dst 10.245.208.156/255.255.255.0
    use_backend aodh-api_admin_10.245.208.156 if net_10.245.2 use_backend aodh-api_admin_10.245.208.156 if net_10.245.2
    default_backend aodh-api_admin_10.245.208.156 default_backend aodh-api_admin_10.245.208.156

backend aodh-api_admin_10.244.32.53 <
    balance leastconn <
    server aodh-2 10.244.32.53:8032 check <
                                                              <
backend aodh-api_admin_192.168.33.57 backend aodh-api_admin_192.168.33.57
    balance leastconn balance leastconn
    server aodh-2 192.168.33.57:8032 check server aodh-2 192.168.33.57:8032 check
                                                              >
                                                              > backend aodh-api_admin_10.244.32.53
                                                              > balance leastconn
                                                              > server aodh-2 10.244.32.53:8032 check

So, looks very much like sorting.

Revision history for this message
Xav Paice (xavpaice) wrote :

That formatting for #14 was revolting, sorry about that... https://pastebin.canonical.com/p/qF6DFnkgkx/

Changed in charm-gnocchi:
status: Incomplete → New
Changed in charm-aodh:
status: Incomplete → New
Changed in charm-designate:
status: Incomplete → New
Revision history for this message
Billy Olsen (billy-olsen) wrote :

From the files Xav collected, it can be seen that the iteration ordering of the cluster.cluster_hosts value changes between renders of the haproxy.cfg file. Comparing the first two samples, 10.244.32.48 and 192.168.33.52 are switched, causing the file to be re-rendered.

=== haproxy.cfg.juju-d7d014-0-lxd-0.20181122233454 ===
frontend tcp-in_aodh-api_admin
    bind *:8042
    bind :::8042
    acl net_10.244.32.48 dst 10.244.32.48/255.255.255.0
    use_backend aodh-api_admin_10.244.32.48 if net_10.244.32.48
    acl net_192.168.33.52 dst 192.168.33.52/255.255.255.0
    use_backend aodh-api_admin_192.168.33.52 if net_192.168.33.52
    acl net_10.245.208.190 dst 10.245.208.190/255.255.255.0
    use_backend aodh-api_admin_10.245.208.190 if net_10.245.208.190
    default_backend aodh-api_admin_10.245.208.190

backend aodh-api_admin_10.244.32.48
    balance leastconn
    server aodh-0 10.244.32.48:8032 check

backend aodh-api_admin_192.168.33.52
    balance leastconn
    server aodh-0 192.168.33.52:8032 check

=== haproxy.cfg.juju-d7d014-0-lxd-0.20181122235548 ===
frontend tcp-in_aodh-api_admin
    bind *:8042
    bind :::8042
    acl net_192.168.33.52 dst 192.168.33.52/255.255.255.0
    use_backend aodh-api_admin_192.168.33.52 if net_192.168.33.52
    acl net_10.244.32.48 dst 10.244.32.48/255.255.255.0
    use_backend aodh-api_admin_10.244.32.48 if net_10.244.32.48
    acl net_10.245.208.190 dst 10.245.208.190/255.255.255.0
    use_backend aodh-api_admin_10.245.208.190 if net_10.245.208.190
    default_backend aodh-api_admin_10.245.208.190

backend aodh-api_admin_192.168.33.52
    balance leastconn
    server aodh-0 192.168.33.52:8032 check

backend aodh-api_admin_10.244.32.48
    balance leastconn
    server aodh-0 10.244.32.48:8032 check

==============
Checking the code, it adds the network split addresses first:

        # Note(AJK) - bug #1698814 - cluster_hosts needs to be ordered so that
        # re-writes with no changed data don't cause a restart (dictionaries
        # are 'randomly' ordered)
        self.cluster_hosts = collections.OrderedDict()
        if relation:
            self.add_network_split_addresses()
            self.add_default_addresses()

And add_network_split_addresses() iterates over ADDRESS_TYPES to determine the order in which the addresses are added:

        """Populate cluster_hosts with addresses of this unit and its
           peers on each address type

           @return None
        """
        for addr_type in ADDRESS_TYPES:
            cfg_opt = os_ip.ADDRESS_MAP[addr_type]['config']
            laddr = ch_ip.get_relation_ip(
            ...

Since this ordering changes, let's check the type of ADDRESS_TYPES, which is simply the keys() of a dict:

ADDRESS_TYPES = os_ip.ADDRESS_MAP.keys()

And the ADDRESS_MAP is a simple dict (defined in charms_openstack/ip.py):

ADDRESS_MAP = {
    PUBLIC: {
        'binding': 'public',
        'config': 'os-public-network',
        'fallback': 'public-address',
        'override': 'os-public-hostname',
    },
    INTERNAL: {
        'binding': 'internal',
        'config': 'os-internal-n...


Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charms.openstack (master)

Reviewed: https://review.openstack.org/619774
Committed: https://git.openstack.org/cgit/openstack/charms.openstack/commit/?id=7c54bc9597efb2173984aeff131e1f924da6153f
Submitter: Zuul
Branch: master

commit 7c54bc9597efb2173984aeff131e1f924da6153f
Author: Billy Olsen <email address hidden>
Date: Fri Nov 23 10:07:05 2018 -0700

    Consistent ordering of ADDRESS_TYPES when traversing dict

    charms_openstack.adapters.ADDRESS_TYPES is traversed when filling
    in network split information for haproxy configuration file generation.
    The ADDRESS_TYPES constant is built from the keys of a simple python
    dictionary, which provides no guaranteed ordering. This causes the
    charm to render slight variations of the haproxy.cfg file and the
    haproxy service is restarted as a result.

    This is fixed by simply sorting the keys to ensure consistent ordering.
    The list is reverse sorted so that the iteration occurs in the order
    of public -> internal -> admin.

    Change-Id: I89b2ee42b5827d0fe5bbb2ff7051e1bb5bd08c63
    Closes-Bug: #1800980
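
Going by that commit message, the change presumably amounts to something like the following in charms.openstack (reconstructed from the description above, not copied from the actual diff):

    # Sketch of the described fix: build ADDRESS_TYPES from a reverse-sorted
    # list of keys so every hook run traverses public -> internal -> admin in
    # the same order and renders an identical haproxy.cfg.
    import charms_openstack.ip as os_ip

    # Before: plain dict keys(), with no ordering guarantee on the Python
    # versions in use.
    # ADDRESS_TYPES = os_ip.ADDRESS_MAP.keys()

    # After: deterministic, reverse-sorted ordering.
    ADDRESS_TYPES = sorted(os_ip.ADDRESS_MAP.keys(), reverse=True)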

Changed in charms.openstack:
status: New → Fix Released
Revision history for this message
James Page (james-page) wrote :

I don't think this made it for 18.11; however, a rebuild of both the master branch and stable charms will pick up the required fix.

Changed in charm-designate:
status: New → Triaged
Changed in charm-aodh:
status: New → Triaged
Changed in charm-gnocchi:
status: New → Triaged
importance: Undecided → High
Changed in charm-aodh:
importance: Undecided → High
Changed in charm-designate:
importance: Undecided → High
Changed in charm-barbican:
status: New → Triaged
importance: Undecided → High
Changed in charm-gnocchi:
assignee: nobody → James Page (james-page)
Changed in charm-aodh:
assignee: nobody → James Page (james-page)
Changed in charm-barbican:
assignee: nobody → James Page (james-page)
Changed in charm-designate:
assignee: nobody → James Page (james-page)
Changed in charm-gnocchi:
milestone: none → 19.04
Changed in charm-aodh:
milestone: none → 19.04
Changed in charm-barbican:
milestone: none → 19.04
Changed in charm-designate:
milestone: none → 19.04
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on charm-designate (master)

Change abandoned by James Page (<email address hidden>) on branch: master
Review: https://review.openstack.org/614979
Reason: Fix done in charms.openstack

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on charm-aodh (master)

Change abandoned by James Page (<email address hidden>) on branch: master
Review: https://review.openstack.org/614696
Reason: Fix done in charms.openstack

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on charm-gnocchi (master)

Change abandoned by James Page (<email address hidden>) on branch: master
Review: https://review.openstack.org/614691
Reason: Fix done in charms.openstack

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-barbican (master)

Reviewed: https://review.openstack.org/620289
Committed: https://git.openstack.org/cgit/openstack/charm-barbican/commit/?id=402f48a17185fea63722e10a49446b56411e03c6
Submitter: Zuul
Branch: master

commit 402f48a17185fea63722e10a49446b56411e03c6
Author: James Page <email address hidden>
Date: Tue Nov 27 10:38:00 2018 +0000

    Rebuild for haproxy restart issues

    Rebuild charm to pickup latest changes to charms.openstack to
    resolve issues with haproxy being restarted due to random
    dict iteration.

    Change-Id: I4af99978ca18015530026cb99702e4abd6dde6b7
    Closes-Bug: 1800980

Changed in charm-barbican:
status: Triaged → Fix Committed
Changed in charm-gnocchi:
status: Triaged → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-gnocchi (master)

Reviewed: https://review.openstack.org/620286
Committed: https://git.openstack.org/cgit/openstack/charm-gnocchi/commit/?id=b335717e8de8b6f6eaa6435843de7f9763c7dc9f
Submitter: Zuul
Branch: master

commit b335717e8de8b6f6eaa6435843de7f9763c7dc9f
Author: James Page <email address hidden>
Date: Tue Nov 27 10:35:38 2018 +0000

    Rebuild for haproxy restart issues

    Rebuild charm to pickup latest changes to charms.openstack to
    resolve issues with haproxy being restarted due to random
    dict iteration.

    Change-Id: I276aa16bf0b64c3d72a60e11f527e9c8639f4a63
    Closes-Bug: 1800980

Changed in charm-designate:
status: Triaged → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-designate (master)

Reviewed: https://review.openstack.org/620288
Committed: https://git.openstack.org/cgit/openstack/charm-designate/commit/?id=03778b8cb51f3629569169d5d4ce55421546e88c
Submitter: Zuul
Branch: master

commit 03778b8cb51f3629569169d5d4ce55421546e88c
Author: James Page <email address hidden>
Date: Tue Nov 27 10:37:38 2018 +0000

    Rebuild for haproxy restart issues

    Rebuild charm to pickup latest changes to charms.openstack to
    resolve issues with haproxy being restarted due to random
    dict iteration.

    Change-Id: I96643fbe74f0413c6e44b4c78e0c8a5effbb5d8f
    Closes-Bug: 1800980

Changed in charm-aodh:
status: Triaged → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-aodh (master)

Reviewed: https://review.openstack.org/620287
Committed: https://git.openstack.org/cgit/openstack/charm-aodh/commit/?id=e93a8da4a70c08931a3dd5164cc2e7afa78d3a5b
Submitter: Zuul
Branch: master

commit e93a8da4a70c08931a3dd5164cc2e7afa78d3a5b
Author: James Page <email address hidden>
Date: Tue Nov 27 10:36:57 2018 +0000

    Rebuild for haproxy restart issues

    Rebuild charm to pickup latest changes to charms.openstack to
    resolve issues with haproxy being restarted due to random
    dict iteration.

    Change-Id: I58f2d452ad4c25f4892b753498fbfad824b77df1
    Closes-Bug: 1800980

Revision history for this message
Drew Freiberger (afreiberger) wrote :

Per field-medium SLA, these commits should be backported to 18.11 charms. Can we get an ETA/feasibility of the backport?

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-aodh (stable/18.11)

Fix proposed to branch: stable/18.11
Review: https://review.openstack.org/621712

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-designate (stable/18.11)

Fix proposed to branch: stable/18.11
Review: https://review.openstack.org/621713

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-gnocchi (stable/18.11)

Fix proposed to branch: stable/18.11
Review: https://review.openstack.org/621714

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-barbican (stable/18.11)

Fix proposed to branch: stable/18.11
Review: https://review.openstack.org/621715

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Stable charm backport reviews are raised. For clarity, this is the charms.openstack fix which will make its way into the stable rebuilds:

https://review.openstack.org/#/c/619774/

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-designate (stable/18.11)

Reviewed: https://review.openstack.org/621713
Committed: https://git.openstack.org/cgit/openstack/charm-designate/commit/?id=96857ba0bcc0a638fa5c18bb68b9fe71d04692bb
Submitter: Zuul
Branch: stable/18.11

commit 96857ba0bcc0a638fa5c18bb68b9fe71d04692bb
Author: James Page <email address hidden>
Date: Tue Nov 27 10:37:38 2018 +0000

    Rebuild for haproxy restart issues

    Rebuild charm to pickup latest changes to charms.openstack to
    resolve issues with haproxy being restarted due to random
    dict iteration.

    Change-Id: I96643fbe74f0413c6e44b4c78e0c8a5effbb5d8f
    Closes-Bug: 1800980
    (cherry picked from commit 03778b8cb51f3629569169d5d4ce55421546e88c)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-barbican (stable/18.11)

Reviewed: https://review.openstack.org/621715
Committed: https://git.openstack.org/cgit/openstack/charm-barbican/commit/?id=e79147330aa08886bc0dd552cd0da1d5e365c515
Submitter: Zuul
Branch: stable/18.11

commit e79147330aa08886bc0dd552cd0da1d5e365c515
Author: James Page <email address hidden>
Date: Tue Nov 27 10:38:00 2018 +0000

    Rebuild for haproxy restart issues

    Rebuild charm to pickup latest changes to charms.openstack to
    resolve issues with haproxy being restarted due to random
    dict iteration.

    Change-Id: I4af99978ca18015530026cb99702e4abd6dde6b7
    Closes-Bug: 1800980
    (cherry picked from commit 402f48a17185fea63722e10a49446b56411e03c6)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-gnocchi (stable/18.11)

Reviewed: https://review.openstack.org/621714
Committed: https://git.openstack.org/cgit/openstack/charm-gnocchi/commit/?id=69fca01448241148b052bf39dcfec6dfbd1440dc
Submitter: Zuul
Branch: stable/18.11

commit 69fca01448241148b052bf39dcfec6dfbd1440dc
Author: James Page <email address hidden>
Date: Tue Nov 27 10:35:38 2018 +0000

    Rebuild for haproxy restart issues

    Rebuild charm to pickup latest changes to charms.openstack to
    resolve issues with haproxy being restarted due to random
    dict iteration.

    Change-Id: I276aa16bf0b64c3d72a60e11f527e9c8639f4a63
    Closes-Bug: 1800980
    (cherry picked from commit b335717e8de8b6f6eaa6435843de7f9763c7dc9f)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-aodh (stable/18.11)

Reviewed: https://review.openstack.org/621712
Committed: https://git.openstack.org/cgit/openstack/charm-aodh/commit/?id=c93acc15fc469dba4dc541e47213375514dc56fc
Submitter: Zuul
Branch: stable/18.11

commit c93acc15fc469dba4dc541e47213375514dc56fc
Author: James Page <email address hidden>
Date: Tue Nov 27 10:36:57 2018 +0000

    Rebuild for haproxy restart issues

    Rebuild charm to pickup latest changes to charms.openstack to
    resolve issues with haproxy being restarted due to random
    dict iteration.

    Change-Id: I58f2d452ad4c25f4892b753498fbfad824b77df1
    Closes-Bug: 1800980
    (cherry picked from commit e93a8da4a70c08931a3dd5164cc2e7afa78d3a5b)

James Page (james-page)
Changed in charm-designate:
status: Fix Committed → Fix Released
Changed in charm-barbican:
status: Fix Committed → Fix Released
Changed in charm-aodh:
status: Fix Committed → Fix Released
Changed in charm-gnocchi:
status: Fix Committed → Fix Released