haproxy causing continuous service restart on status-update
Bug #1698814 reported by
Gábor Mészáros
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Designate Charm | Fix Released | High | Alex Kavanagh | 17.08
Bug Description
Designate keeps restarting all of its services because haproxy.cfg is being regenerated on every update-status hook (every 5 minutes).
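The usual fix for this class of bug is to make the config write idempotent: only rewrite the file, and only restart the service, when the rendered content actually differs from what is on disk. A minimal sketch of that guard (the `write_if_changed` helper is hypothetical, not the charm's actual API):

```python
import hashlib
import os
import tempfile


def write_if_changed(path, content):
    """Write `content` to `path` only if it differs from the on-disk copy.

    Returns True when the file was (re)written, i.e. a restart is warranted.
    """
    new_hash = hashlib.sha256(content.encode()).hexdigest()
    if os.path.exists(path):
        with open(path, "rb") as f:
            old_hash = hashlib.sha256(f.read()).hexdigest()
        if old_hash == new_hash:
            return False  # identical content: skip the write and the restart
    with open(path, "w") as f:
        f.write(content)
    return True


# Usage: only bounce haproxy when the rendered config actually changed.
cfg = "listen designate-api\n    bind *:9001\n"
path = os.path.join(tempfile.mkdtemp(), "haproxy.cfg")
first = write_if_changed(path, cfg)   # file created, restart needed
second = write_if_changed(path, cfg)  # unchanged, no restart
```

With a guard like this, the update-status hook can re-render the template every 5 minutes without the services ever being restarted unless something really changed.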
tags: added: 4010
Changed in charm-designate:
assignee: nobody → Alex Kavanagh (ajkavanagh)
importance: Undecided → High
Changed in charm-designate:
status: New → In Progress
Changed in charm-designate:
milestone: none → 17.08
Changed in charm-designate:
status: Fix Committed → Fix Released
Info: Reading through the logs, the key parts are:

update-status hook:
designate_handlers.py:66:setup_amqp_req
designate_handlers.py:122:configure_designate_full
"api_public port is already in use"
haproxy.cfg written root:root 444 ← this is when haproxy.cfg is re-written
Invoking reactive handler: reactive/
Invoking reactive handler: reactive/
... then runs apt update, twice
2017-06-19 13:01:26 WARNING juju-log Not adding haproxy listen stanza for designate-api_int port is already in use
2017-06-19 13:01:26 WARNING juju-log Not adding haproxy listen stanza for designate-
2017-06-19 13:01:26 INFO juju-log Writing file /etc/haproxy/
and then:
zone-manager stop/waiting
pool-manager stop/waiting
zone-manager start/running, process 36701
pool-manager start/running, process 36735
designate_handlers.py:84:setup_endpoint
designate_handlers.py:74:setup_database
designate_handlers.py:143:cluster_connected
designate_handlers.py:47:set_dns_config_available
designate_handlers.py:116:update_peers
2017-06-19 13:01:26 INFO update-status * Stopping haproxy haproxy
2017-06-19 13:01:26 INFO update-status ...done.
2017-06-19 13:01:26 INFO update-status designate-mdns stop/waiting
2017-06-19 13:01:31 INFO update-status designate-
2017-06-19 13:01:31 INFO update-status designate-agent stop/waiting
2017-06-19 13:01:36 INFO update-status designate-
2017-06-19 13:01:36 INFO update-status designate-central stop/waiting
2017-06-19 13:01:36 INFO update-status designate-sink stop/waiting
2017-06-19 13:01:36 INFO update-status designate-api stop/waiting
2017-06-19 13:01:36 INFO update-status * Starting haproxy haproxy
2017-06-19 13:01:36 INFO update-status [WARNING] 169/130136 (36667) : config : proxy 'stats' has no 'bind' directive. Please declare it as a backend if this was intended.
2017-06-19 13:01:36 INFO update-status ...done.
2017-06-19 13:01:37 INFO update-status designate-mdns start/running, process 36684
2017-06-19 13:01:37 INFO update-status designate-
2017-06-19 13:01:37 INFO update-status designate-agent start/running, process 36718
2017-06-19 13:01:37 INFO update-status designate-
2017-06-19 13:01:37 INFO update-status designate-central start/running, process 36752
2017-06-19 13:01:37 INFO update-status designate-sink start/running, process 36769
2017-06-19 13:01:37 INFO update-status designate-api start/running, process 36786
... then writes a whole bunch of config files
... and then invokes a bunch of handlers:
2017-06-19 13:01:42 INFO juju-log Invoking reactive handler: reactive/
2017-06-19 13:01:43 INFO juju-log Invoking reactive handler: reactive/
2017-06-19 13:01:44 INFO juju-log Invoking reactive handler: reactive/
2017-06-19 13:01:44 INFO juju-log Invoking reactive handler: reactive/
2017-06-19 13:01:44 INFO juju-log Invoking reactive handler: reactive/
... and then it's over.
Conclusions:
1. It's doing WAY too much in the update-status hook (apt-get update, re-writing lots of config files, etc.). The haproxy.cfg rewrite is what triggers the restarts, and the code responsible is probably in charms.openstack.
2. It looks like the write to /etc/haproxy/
More investigation to follow.
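Independent of fixing the config write itself, the heavy handlers could also be skipped entirely during update-status, which is only meant to assess workload health. Juju exports the executing hook's name in the hook environment, so a guard is straightforward. This is a hypothetical sketch, not the charm's actual code (`configure_designate_full` here is a stand-in for the real handler):

```python
import os


def is_update_status_hook():
    # Juju sets JUJU_HOOK_NAME in the environment of each executing hook.
    return os.environ.get("JUJU_HOOK_NAME", "") == "update-status"


def configure_designate_full(render):
    # Hypothetical guard: during update-status only assess state;
    # never re-render config or restart services.
    if is_update_status_hook():
        return "skipped"
    render()
    return "rendered"


# Usage: simulate being invoked from the update-status hook.
os.environ["JUJU_HOOK_NAME"] = "update-status"
result = configure_designate_full(lambda: None)
```

Either approach (content-hash guard or hook-name guard) would stop the 5-minute restart cycle; doing both makes the hooks cheap and the restarts rare.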