haproxy.cfg backends do not reflect all cluster members - no load balancing
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Gnocchi Charm | Fix Released | Critical | David Ames | |
| OpenStack AODH Charm | Fix Released | High | David Ames | |
| OpenStack API Layer | Fix Released | Critical | David Ames | |
| OpenStack Barbican Charm | Fix Released | High | David Ames | |
| OpenStack Manila Charm | Fix Released | High | David Ames | |
| OpenStack Masakari Charm | Fix Released | High | David Ames | |
| OpenStack Nova Cell Controller Charm | Fix Released | High | David Ames | |
| OpenStack Octavia Charm | Fix Released | High | David Ames | |
| OpenStack Placement Charm | Fix Released | High | David Ames | |
| charms.openstack | Fix Released | Critical | David Ames | |
Bug Description
When gnocchi is configured with separate admin-space, internal-space, and external-space networks, with multiple VIPs set for internal-space and external-space (but not admin-space), and with the "cluster" binding on the admin-space, the backends in haproxy.cfg are missing all non-local servers: on the external/internal frontends, gnocchi-0 lists only the gnocchi-0 backend, gnocchi-1 lists only itself, and so on. For example:
```
frontend tcp-in_
    bind *:8041
    bind :::8041
    acl net_192.168.1.39 dst 192.168.
    use_backend gnocchi-
    acl net_192.168.2.52 dst 192.168.
    use_backend gnocchi-
    acl net_192.168.0.82 dst 192.168.
    use_backend gnocchi-
    default_backend gnocchi-

backend gnocchi-
    balance leastconn
    server gnocchi-1 192.168.1.2:8031 check

backend gnocchi-
    balance leastconn
    server gnocchi-1 192.168.2.2:8031 check

backend gnocchi-
    balance leastconn
    server gnocchi-1 192.168.0.2:8031 check
    server gnocchi-2 192.168.0.3:8031 check
    server gnocchi-3 192.168.0.4:8031 check
```
To reproduce: create three networks, register three Juju spaces, deploy gnocchi and hacluster along with a basic keystone/mysql/rmq setup, and configure gnocchi as follows. Assuming the spaces are:

- oam-space: 192.168.0.0/24
- internal-space: 192.168.1.0/24
- external-space: 192.168.2.0/24

```yaml
gnocchi:
  options:
    vip: 192.168.1.254 192.168.2.254
  bindings:
    "": oam-space
    cluster: oam-space
    internal: internal-space
    admin: external-space
    public: external-space
```
Once this setup is in place, relation-get between gnocchi units shows only 192.168.0.X addresses. When the gnocchi charm configures the HAProxyContext, the cluster members for the frontends on the external-space and internal-space have no matching backend IPs, so the template fails to render the other two backends for those functional networks, and only the incoming interface on the "cluster" binding's network ends up load-balanced.
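The failure mode above can be illustrated with a small sketch (the helper name and data layout are illustrative, not the actual charms.openstack API): because peers advertise only their cluster-binding (oam-space) address, filtering peers by subnet membership yields empty backend lists for the internal and external networks.

```python
import ipaddress

# Addresses each peer advertises over the "cluster" relation; with the
# cluster binding on oam-space, these are 192.168.0.0/24 addresses only.
peer_addresses = ["192.168.0.3", "192.168.0.4"]

def backends_for_network(cidr, peers):
    """Keep only the peers whose advertised address falls inside the subnet."""
    net = ipaddress.ip_network(cidr)
    return [p for p in peers if ipaddress.ip_address(p) in net]

# The oam-space frontend matches every peer...
print(backends_for_network("192.168.0.0/24", peer_addresses))
# ...but the internal- and external-space frontends match nothing, so each
# unit renders only its own local backend for those networks.
print(backends_for_network("192.168.1.0/24", peer_addresses))
print(backends_for_network("192.168.2.0/24", peer_addresses))
```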
In the gnocchi charm, the template file templates/ adds backends to a service with this logic:

```
{% for unit, address in cluster.
```
Somehow, the gnocchi charm must detect all VIP-related network IPs and communicate them over the cluster relation, so that all frontends can be properly load-balanced.
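One way the fix could work (an illustrative sketch with assumed names and data layout, not the actual charms.openstack change): each unit publishes one address per network binding on the cluster relation, and the HAProxy context then inverts that data into one backend list per network.

```python
# Hypothetical per-unit relation data: an address for every extra binding,
# not just the cluster-binding address.
relation_data = {
    "gnocchi-1": {"admin": "192.168.0.2", "internal": "192.168.1.2", "public": "192.168.2.2"},
    "gnocchi-2": {"admin": "192.168.0.3", "internal": "192.168.1.3", "public": "192.168.2.3"},
    "gnocchi-3": {"admin": "192.168.0.4", "internal": "192.168.1.4", "public": "192.168.2.4"},
}

def backends_by_network(data):
    """Invert unit -> {network: address} into network -> [(unit, address)]."""
    nets = {}
    for unit, addrs in sorted(data.items()):
        for net, addr in addrs.items():
            nets.setdefault(net, []).append((unit, addr))
    return nets

# Every network now has a full member list for its haproxy backend.
for net, members in sorted(backends_by_network(relation_data).items()):
    print(net, members)
```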
Expected result:

```
backend gnocchi-
    balance leastconn
    server gnocchi-1 192.168.1.2:8031 check
    server gnocchi-2 192.168.1.3:8031 check
    server gnocchi-3 192.168.1.4:8031 check

backend gnocchi-
    balance leastconn
    server gnocchi-1 192.168.2.2:8031 check
    server gnocchi-2 192.168.2.3:8031 check
    server gnocchi-3 192.168.2.4:8031 check

backend gnocchi-
    balance leastconn
    server gnocchi-1 192.168.0.2:8031 check
    server gnocchi-2 192.168.0.3:8031 check
    server gnocchi-3 192.168.0.4:8031 check
```
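Given a full per-network member list, rendering the expected stanzas is straightforward. A minimal sketch (the backend name and helper are assumptions, not the charm's actual template):

```python
def render_backend(name, servers, port=8031):
    """Render one haproxy backend stanza from a list of (unit, ip) pairs."""
    lines = ["backend %s" % name, "    balance leastconn"]
    lines += ["    server %s %s:%d check" % (unit, ip, port)
              for unit, ip in servers]
    return "\n".join(lines)

# Member list for the internal-space network, as in the expected result.
internal_members = [
    ("gnocchi-1", "192.168.1.2"),
    ("gnocchi-2", "192.168.1.3"),
    ("gnocchi-3", "192.168.1.4"),
]
print(render_backend("gnocchi_internal", internal_members))
```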
This is intertwined with lp#1857039, which points out that the 192.168.0.0/24 network does not get an apache vhost because it is not a member IP of one of the admin/internal/
tags: added: sts
Changed in charm-gnocchi:
  importance: Undecided → Critical
  milestone: none → 20.05
  assignee: nobody → David Ames (thedac)
  status: New → Triaged
Changed in layer-openstack-api:
  importance: Undecided → Critical
Changed in charms.openstack:
  importance: Undecided → Critical
Changed in layer-openstack-api:
  assignee: nobody → David Ames (thedac)
Changed in charms.openstack:
  assignee: nobody → David Ames (thedac)
Changed in layer-openstack-api:
  milestone: none → 20.05
Changed in layer-openstack-api:
  status: New → Fix Committed
Changed in charm-aodh:
  status: New → Triaged
  importance: Undecided → High
  assignee: nobody → David Ames (thedac)
  milestone: none → 20.05
Changed in charm-barbican:
  assignee: nobody → David Ames (thedac)
  importance: Undecided → High
  milestone: none → 20.05
  status: New → Triaged
Changed in charm-manila:
  assignee: nobody → David Ames (thedac)
  importance: Undecided → High
  milestone: none → 20.05
  status: New → Triaged
Changed in charm-masakari:
  assignee: nobody → David Ames (thedac)
  importance: Undecided → High
  milestone: none → 20.05
  status: New → Triaged
Changed in charm-nova-cell-controller:
  assignee: nobody → David Ames (thedac)
  importance: Undecided → High
  milestone: none → 20.05
  status: New → Triaged
Changed in charm-octavia:
  assignee: nobody → David Ames (thedac)
  importance: Undecided → High
  milestone: none → 20.05
  status: New → Triaged
Changed in charm-placement:
  assignee: nobody → David Ames (thedac)
  importance: Undecided → High
  milestone: none → 20.05
  status: New → Triaged
Changed in charm-gnocchi:
  status: Fix Committed → Fix Released
Changed in layer-openstack-api:
  status: Fix Committed → Fix Released
Changed in charm-barbican:
  status: Fix Committed → Fix Released
Changed in charm-nova-cell-controller:
  status: Fix Committed → Fix Released
Changed in charm-placement:
  status: Fix Committed → Fix Released
Changed in charm-octavia:
  status: Fix Committed → Fix Released
Changed in charm-manila:
  status: Fix Committed → Fix Released
Changed in charm-aodh:
  status: Fix Committed → Fix Released
Changed in charm-masakari:
  status: Fix Committed → Fix Released
Subscribing field-high, as this is affecting gnocchi performance on a production site; load-balancing the API across the three units may help alleviate the pressure on this service.