VIP in haproxy and VIP VirtualHost missing in Apache SSL frontend config

Bug #1928395 reported by David Ames
Affects: OpenStack Keystone Charm
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Due to Juju LP Bug #1897261, a unit's VIP address can be incorrectly propagated over a relation. Although Juju ultimately needs to fix this bug, the OpenStack charms may be able to mitigate it in the meantime, per Drew Freiberger.
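Until the Juju bug is fixed, a charm-side mitigation could filter configured VIPs out of any peer-advertised addresses before using them. A minimal sketch, assuming a helper of this shape exists (the function and parameter names are hypothetical, not existing charm helpers):

```python
import ipaddress


def filter_vip_addresses(advertised, vips):
    """Return the advertised addresses with any configured VIP removed.

    advertised: iterable of address strings a peer handed over the relation
    vips: iterable of VIP strings, e.g. from the charm's 'vip' config option
    """
    vip_set = {ipaddress.ip_address(v) for v in vips}
    return [a for a in advertised if ipaddress.ip_address(a) not in vip_set]


# Using the addresses from this report: the VIP 10.0.9.39 is dropped,
# leaving only the unit's real address 10.0.9.74.
addrs = filter_vip_addresses(
    ["10.0.9.39", "10.0.9.74"],
    "10.0.9.39 10.0.11.39 10.0.10.39".split(),
)
```

A filter like this would stop a leaked VIP from ever landing in a haproxy backend, regardless of what the peer advertises.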

https://bugs.launchpad.net/juju/+bug/1897261


The crux of the issue is that after the VIP shows up in haproxy, the VirtualHost for the VIP is removed from the Apache config, and SSL termination fails from that point on.

Can we mitigate this by guaranteeing the VIP VirtualHost stays in openstack_frontend_ssl.conf?
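One possible shape for that mitigation: when rendering the Apache frontend config, append a VirtualHost stanza for any configured VIP that is missing. A sketch with a deliberately simplified template (the real charm template also carries SSL certificate, proxy, and logging directives):

```python
# Simplified placeholder stanza; not the charm's actual
# openstack_frontend_ssl.conf template.
VHOST_TEMPLATE = """<VirtualHost {vip}:443>
    ServerName {vip}
</VirtualHost>
"""


def ensure_vip_virtualhosts(conf_text, vips):
    """Append a VirtualHost stanza for each VIP not already present."""
    for vip in vips:
        if f"<VirtualHost {vip}:" not in conf_text:
            conf_text += "\n" + VHOST_TEMPLATE.format(vip=vip)
    return conf_text


conf = ensure_vip_virtualhosts("", ["10.0.9.39", "10.0.11.39"])
```

The check is idempotent, so running it on every config-changed hook would guarantee the VIP VirtualHost survives even after the relation data goes bad.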

Full context of Drew's discussion:

"""
We're finding that random HA apps are trading their VIP as ingress-address/private-address/egress-subnets and then that screws up ssl endpoint resolution because that VIP ends up in an haproxy backend

and that haproxy backend then hits the apache instance on a non-ssl port because virtualhost isn't defined on that IP, so it's going to the fallback apache config instead of the ssl/wsgi config

$ juju config keystone vip
10.0.9.39 10.0.11.39 10.0.10.39
$ juju run -a keystone 'cat /etc/haproxy/haproxy.cfg' | pp|egrep '10.0.9.39|10.0.11.39|10.0.10.39'
    server keystone-0 10.0.9.39:35347 check
    server keystone-0 10.0.9.39:4990 check
    server keystone-0 10.0.9.39:35347 check
    server keystone-0 10.0.9.39:4990 check

This probably hit openstack-dashboard 4 times during the x->b upgrade. basically touching mysql, rmq, or keystone provokes this potential random bug against every single application that relates to keystone as they all hand around their cluster-relation-changed. We can't even do safe config-changes of any of those apps. Bootstack probably fixes this 2 times a day on random apps in random clouds

But it's specifically that the VIP isn't in the openstack_frontend_ssl.conf after the haproxy gets the VIP in a backend config

This is how to fix it:
juju run -u keystone/0 'relation-set -r cluster:25 ingress-address=10.0.9.74 egress-subnets=10.0.9.74/32 private-address=10.0.9.74'

And you can repro, by doing the same thing except finding the VIP-holder and setting those three bits to the VIP

I realize it's juju's bug to fix, but I'd rather fail to less HA than broken ssl.

"""

Note: Filing this against keystone but this will affect all API or HA charms.
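Since Bootstack currently detects this by grepping haproxy.cfg by hand, a small helper that flags backend `server` lines pointing at a VIP could automate the check. A sketch (the function name is hypothetical):

```python
def vip_backends(haproxy_cfg, vips):
    """Return haproxy 'server' lines whose address is a configured VIP."""
    hits = []
    for line in haproxy_cfg.splitlines():
        parts = line.split()
        # haproxy backend lines look like: server <name> <addr>:<port> [opts]
        if len(parts) >= 3 and parts[0] == "server":
            addr = parts[2].rsplit(":", 1)[0]
            if addr in vips:
                hits.append(line.strip())
    return hits


cfg = """listen keystone_admin
    server keystone-0 10.0.9.39:35347 check
    server keystone-1 10.0.9.74:35347 check
"""
bad = vip_backends(cfg, {"10.0.9.39", "10.0.11.39", "10.0.10.39"})
```

Here `bad` contains only the keystone-0 line, since 10.0.9.39 is one of the configured VIPs while 10.0.9.74 is a real unit address.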

Tags: aubergine