Compressed-HA disabling haproxy

Bug #1315102 reported by Ken Schroeder
Affects: Cisco Openstack
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

In the compressed HA scenario, haproxy and keepalived get installed on every server, which is fine. However, the load balancer and keepalived configurations only provide a MASTER/BACKUP mechanism. The MASTER gets instantiated with a VRRP priority of 101 and the BACKUP with a VRRP priority of 100, which is really only valid for two nodes. A third node using BACKUP also gets a VRRP priority of 100, which may cause conflicts during a failover scenario. We should either parameterize the priority variable or otherwise account for three nodes in the compressed_ha scenario.
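
For illustration, here is roughly what the rendered keepalived VRRP stanzas look like under the current scheme (the instance name, interface, router ID, and VIP below are hypothetical, not taken from the module):

  # MASTER node (node-001): rendered with the hardcoded priority 101.
  vrrp_instance controller_vip {
      state MASTER
      interface eth0
      virtual_router_id 51
      priority 101
      virtual_ipaddress {
          192.168.220.10
      }
  }

  # node-002 and node-003 both render this identical BACKUP stanza,
  # so both carry priority 100 and VRRP has no explicit ordering
  # between them; election typically falls back to tie-breaking by
  # IP address rather than an administrator-chosen preference.
  vrrp_instance controller_vip {
      state BACKUP
      interface eth0
      virtual_router_id 51
      priority 100
      virtual_ipaddress {
          192.168.220.10
      }
  }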

The priorities of 100 and 101 are currently hardcoded in the openstack-ha/load-balancer module. Adding a priority variable that could be set in the per-host hostname.yaml would solve this issue; node-003 could then be configured with:

openstack-ha::load-balancer::controller_priority: 103
openstack-ha::load-balancer::swift_proxy_priority: 103

Ken Schroeder (kschroed) wrote:

Any update on fixing this?

What needs updating is the parameterization of openstack-ha/manifests/load-balancer.pp so that these values are configurable. This will allow for three AIO-type nodes in the compressed_ha model, where hostname.yaml overrides can account for the priority on the third node. The current hardcoded logic is shown below; a sketch of one possible parameterization follows it.

  # Current logic in openstack-ha/manifests/load-balancer.pp:
  # every non-MASTER node gets priority 100, so a second BACKUP
  # node duplicates the first.
  if ($controller_state == 'MASTER') {
    $controller_priority = '101'
  } else {
    $controller_priority = '100'
  }

  if ($swift_proxy_state == 'MASTER') {
    $swift_proxy_priority = '101'
  } else {
    $swift_proxy_priority = '100'
  }
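
A possible fix, sketched below, is to expose the priorities as class parameters that default to the current values, so existing two-node deployments are unchanged and node-003's hostname.yaml overrides take effect through Hiera data binding. This is a sketch only: the parameter list, defaults, and $real_* variable names are suggestions, not the module's actual API.

  # Sketch, assuming load-balancer.pp becomes a parameterized class.
  # The class name follows the module's existing naming as used in
  # the hostname.yaml keys above.
  class openstack-ha::load-balancer (
    $controller_state     = 'BACKUP',
    $swift_proxy_state    = 'BACKUP',
    $controller_priority  = undef,
    $swift_proxy_priority = undef,
  ) {
    # Preserve today's behavior when no explicit priority is supplied.
    if $controller_priority {
      $real_controller_priority = $controller_priority
    } elsif $controller_state == 'MASTER' {
      $real_controller_priority = '101'
    } else {
      $real_controller_priority = '100'
    }

    if $swift_proxy_priority {
      $real_swift_proxy_priority = $swift_proxy_priority
    } elsif $swift_proxy_state == 'MASTER' {
      $real_swift_proxy_priority = '101'
    } else {
      $real_swift_proxy_priority = '100'
    }

    # ... haproxy/keepalived resources would then consume
    # $real_controller_priority and $real_swift_proxy_priority ...
  }

With something like this in place, the hostname.yaml overrides shown in the bug description would give node-003 a priority of 103, so all three nodes carry distinct VRRP priorities and the failover order is deterministic.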
