haproxy enabled by default for keystone, cinder, openstack-dashboard, nova-cloud-controller and glance breaks deploying multiple light services to the same node.

Bug #1417407 reported by Samantha Jian-Pielak
This bug affects 7 people
Affects: Juju Charms Collection | Status: New | Importance: Undecided | Assigned to: Unassigned

Bug Description

haproxy is enabled by default for keystone, cinder, openstack-dashboard, nova-cloud-controller, and glance.
cs:trusty/cinder-11
cs:trusty/glance-10
cs:trusty/keystone-11
cs:trusty/nova-cloud-controller-51
cs:trusty/openstack-dashboard-9

This surfaces when deploying these services on the same node: the haproxy config file written by each newly deployed service overwrites the previous one.
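The overwrite can be illustrated with a minimal sketch (hypothetical and simplified; the real charms render haproxy.cfg from templates, but the effect is the same): each service writes a complete config to the same shared path, so the last deploy wins.

```python
import os
import tempfile

# Simplified model (hypothetical): each charm renders a complete
# haproxy.cfg covering only its own service, to one shared path.
def render_haproxy_cfg(path, service, port):
    cfg = (
        "listen {0}\n"
        "    bind 0.0.0.0:{1}\n".format(service, port)
    )
    with open(path, "w") as f:  # "w" truncates: the previous service's config is lost
        f.write(cfg)

cfg_path = os.path.join(tempfile.mkdtemp(), "haproxy.cfg")
render_haproxy_cfg(cfg_path, "keystone", 5000)
render_haproxy_cfg(cfg_path, "glance-api", 9292)

with open(cfg_path) as f:
    contents = f.read()

print("keystone" in contents)    # False: keystone's stanza was overwritten
print("glance-api" in contents)  # True: only the last deploy survives
```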

This was not the behavior when deploying the previous revisions:
cs:trusty/cinder-10
cs:trusty/glance-9
cs:trusty/keystone-9
cs:trusty/nova-cloud-controller-50
cs:trusty/openstack-dashboard-8
With those charms, haproxy.cfg was not filled out.

Steps to reproduce:
1. juju bootstrap onto a MAAS environment
2. juju deploy keystone --config=openstack.cfg
3. juju deploy glance --to 1
4. juju deploy nova-cloud-controller --to 1 --config=openstack.cfg
5. juju deploy openstack-dashboard --to 1
6. juju deploy cinder --to 1 --config=openstack.cfg

juju status: http://pastebin.ubuntu.com/10029605/
juju ssh into the unit to check /etc/haproxy/haproxy.cfg, and save a copy of the config before each new deploy so the overwrite can be observed.

Revision history for this message
Samantha Jian-Pielak (samantha-jian) wrote :
Jeff Lane (bladernr) wrote :

I concur. To provide a little more detail: because haproxy is enabled in the default charm, even non-HA OpenStack deployments become unusable when multiple services are placed on the same machine outside of separate containers, which until this point had not really been an issue:

http://pastebin.ubuntu.com/10040950/

To really quickly duplicate it, all you need to do is:

juju bootstrap
juju deploy --to=0 --config=openstack.cfg keystone
juju ssh keystone/0
source ./nova.rc
keystone tenant-list
exit
juju deploy --to=0 --config=openstack.cfg nova-cloud-controller
juju ssh keystone/0
source ./nova.rc
keystone tenant-list

This is the contents of my openstack.cfg and nova.rc:

http://pastebin.ubuntu.com/10041027/

So I think the question now is: what is the suggested method for doing a non-HA installation? We were in the middle of writing a whitepaper with a customer when this change in the charms occurred. When we PoC'd our non-HA deployment and saved all the deployment steps for the whitepaper, this was not an issue.

Why do we not simply have charms called horizon and horizon-ha, nova-cloud-controller and nova-cloud-controller-ha, and so on? (Just curious why the decision was made not to rename them; was it a matter of future maintenance?)

summary: haproxy enabled by default for keystone, cinder, openstack-dashboard,
- nova-cloud-controller and glance
+ nova-cloud-controller and glance breaks deploying multiple light
+ services to the same node.
Miika Kankare (kuula) wrote :

So, is there a clean way of solving this?
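One workaround (an assumption on my part, not a fix in the charms themselves) is to place each service in its own container on the machine, so each gets a private /etc/haproxy/haproxy.cfg. Juju supports container placement directives for this (juju 1.x LXC syntax shown; newer versions use LXD):

juju deploy --to lxc:0 --config=openstack.cfg keystone
juju deploy --to lxc:0 --config=openstack.cfg nova-cloud-controller

This avoids the shared-file clobbering at the cost of one container per service, and assumes your provider (e.g. MAAS) supports containers on the target machine.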

SysXpert (sysxpert) wrote :

The problem still exists with the latest charms. How can we actually fix this cleanly?

SysXpert (sysxpert) wrote :

Can someone please at least have the decency to mention that this is a `won't fix`?

Graeme Moss (graememoss) wrote :

Hi

This is still a problem in the new OpenStack charms.

For example, deploying keystone and openstack-dashboard in the same LXD container results in a race condition to overwrite the haproxy.cfg file. The charms need to use unique config files per service to stop this race condition.

Please can this be looked into, as no one has looked at it since 2015.
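A sketch of the per-service scheme suggested above (hypothetical; this is not the charms' actual code): each charm owns one file under a shared directory, and haproxy is started with one -f option per fragment. Repeating -f has long been supported by haproxy, and newer releases also accept a directory argument directly.

```python
import os
import tempfile

# Hypothetical per-service layout: each charm writes only its own
# fragment, so concurrent deploys never truncate each other's config.
conf_dir = os.path.join(tempfile.mkdtemp(), "haproxy.d")
os.makedirs(conf_dir)

def write_service_cfg(service, port):
    path = os.path.join(conf_dir, service + ".cfg")
    with open(path, "w") as f:
        f.write("listen {0}\n    bind 0.0.0.0:{1}\n".format(service, port))
    return path

files = [write_service_cfg("keystone", 5000),
         write_service_cfg("openstack-dashboard", 80)]

# Build the invocation: the shared defaults file plus one -f per fragment.
cmd = ["haproxy", "-f", "/etc/haproxy/haproxy.cfg"]
for path in sorted(files):
    cmd += ["-f", path]

print(sorted(os.listdir(conf_dir)))  # both per-service files coexist
```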
