haproxy-playbook fails when installing on multiple hosts

Bug #1487409 reported by Jean-Philippe Evrard
This bug affects 1 person
Affects            Status        Importance  Assigned to            Milestone
OpenStack-Ansible  Fix Released  Low         Jean-Philippe Evrard
Kilo               Fix Released  Low         Jean-Philippe Evrard
Trunk              Fix Released  Low         Jean-Philippe Evrard

Bug Description

HAProxy is restarted at the end of the playbook (twice, because of the post_tasks). When restarting, it tries to bind to its VIP address.
If HAProxy is deployed on multiple hosts and the external VIP is different from the internal VIP, the play will fail: the VIP can only be held by one server at a time, so the bind, and therefore the restart, fails on every host except the one that currently holds the address.
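
For illustration only, a frontend bound directly to the external VIP looks roughly like the excerpt below (the frontend name, address, and port are hypothetical, not taken from the generated configuration). With the kernel default net.ipv4.ip_nonlocal_bind = 0, the bind only succeeds on the host that currently holds the VIP, so the restart fails on every other host:

    # hypothetical haproxy.cfg excerpt, illustration only
    frontend example_service
        # 203.0.113.10 stands in for the external VIP; only one host holds it at a time
        bind 203.0.113.10:443
        mode tcp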

summary: - haproxy fails to restart
+ haproxy-playbook fails when installing on multiple hosts
description: updated
Changed in openstack-ansible:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to os-ansible-deployment (master)

Reviewed: https://review.openstack.org/215579
Committed: https://git.openstack.org/cgit/stackforge/os-ansible-deployment/commit/?id=9df04fed70eb9f9e84f6da2ac5bb4d94df037fe6
Submitter: Jenkins
Branch: master

commit 9df04fed70eb9f9e84f6da2ac5bb4d94df037fe6
Author: Jean-Philippe Evrard <email address hidden>
Date: Fri Aug 21 12:56:30 2015 +0200

    Fix haproxy-playbook failure when installing on multiple hosts

    This bug is triggered when haproxy is deployed on multiple hosts
    and external_lb_vip is different from the internal one.
    Because all hosts receive the same configuration and are expected
    to restart the haproxy service more than once
    (once during the role and once in post_tasks), the playbook fails
    when the service restart fails. The restart fails on some hosts
    because haproxy tries to start and bind to an IP the host does not
    have (the VIP lives on only one host at a time, avoiding IP conflicts).

    This change allows haproxy to bind to non-local addresses by
    adding a sysctl change in the playbook: net.ipv4.ip_nonlocal_bind = 1
    The sysctl is changed for the containers/systems, via a group_var,
    when external_lb_vip is different from the internal address and
    there is more than one haproxy host.

    Side effect: other services are also able to bind to non-local
    addresses once the sysctl is changed.

    This can be overridden by setting the variable haproxy_bind_on_non_local
    in your user_* variables. If set to false, the ip_nonlocal_bind
    sysctl won't be changed.

    Closes-Bug: #1487409

    Change-Id: I41b3a5a4ba2d48192b505e3720456a77484aa92b
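
A minimal sketch of the kind of task this change introduces, assuming Ansible's sysctl module and assuming that the variable and group names below (external_lb_vip_address, internal_lb_vip_address, haproxy_hosts) match the deployment inventory; the merged task may differ in detail:

    # Hypothetical playbook excerpt; not the exact task that merged.
    - name: Allow HAProxy to bind to non-local (VIP) addresses
      sysctl:
        name: net.ipv4.ip_nonlocal_bind
        value: 1
        sysctl_set: yes
        state: present
      when:
        - haproxy_bind_on_non_local | bool
        - external_lb_vip_address != internal_lb_vip_address
        - groups['haproxy_hosts'] | length > 1

With net.ipv4.ip_nonlocal_bind set to 1, the restart handler can bind the VIP frontends on every haproxy host, not only on the host that currently owns the address.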

Changed in openstack-ansible:
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to os-ansible-deployment (kilo)

Fix proposed to branch: kilo
Review: https://review.openstack.org/220089

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to os-ansible-deployment (kilo)

Reviewed: https://review.openstack.org/220089
Committed: https://git.openstack.org/cgit/stackforge/os-ansible-deployment/commit/?id=763da9ed9d4d80cba77397af2bb17ce199b7f73f
Submitter: Jenkins
Branch: kilo

commit 763da9ed9d4d80cba77397af2bb17ce199b7f73f
Author: Jean-Philippe Evrard <email address hidden>
Date: Fri Aug 21 12:56:30 2015 +0200

    Fix haproxy-playbook failure when installing on multiple hosts

    This bug is triggered when haproxy is deployed on multiple hosts
    and external_lb_vip is different from the internal one.
    Because all hosts receive the same configuration and are expected
    to restart the haproxy service more than once
    (once during the role and once in post_tasks), the playbook fails
    when the service restart fails. The restart fails on some hosts
    because haproxy tries to start and bind to an IP the host does not
    have (the VIP lives on only one host at a time, avoiding IP conflicts).

    This change allows haproxy to bind to non-local addresses by
    adding a sysctl change in the playbook: net.ipv4.ip_nonlocal_bind = 1
    The sysctl is changed for the containers/systems, via a group_var,
    when external_lb_vip is different from the internal address and
    there is more than one haproxy host.

    Side effect: other services are also able to bind to non-local
    addresses once the sysctl is changed.

    This can be overridden by setting the variable haproxy_bind_on_non_local
    in your user_* variables. If set to false, the ip_nonlocal_bind
    sysctl won't be changed.

    Closes-Bug: #1487409

    Change-Id: I41b3a5a4ba2d48192b505e3720456a77484aa92b
    (cherry picked from commit 9df04fed70eb9f9e84f6da2ac5bb4d94df037fe6)
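
As the commit message notes, the sysctl change can be opted out of through a user variable. A hedged example, assuming /etc/openstack_deploy/user_variables.yml as the usual location for user_* overrides:

    # /etc/openstack_deploy/user_variables.yml (assumed path)
    # Leave net.ipv4.ip_nonlocal_bind untouched; HAProxy will then only bind
    # addresses the host actually owns.
    haproxy_bind_on_non_local: false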

Revision history for this message
Davanum Srinivas (DIMS) (dims-v) wrote : Fix included in openstack/openstack-ansible 11.2.11

This issue was fixed in the openstack/openstack-ansible 11.2.11 release.

Revision history for this message
Doug Hellmann (doug-hellmann) wrote : Fix included in openstack/openstack-ansible 11.2.12

This issue was fixed in the openstack/openstack-ansible 11.2.12 release.

Revision history for this message
Davanum Srinivas (DIMS) (dims-v) wrote : Fix included in openstack/openstack-ansible 11.2.14

This issue was fixed in the openstack/openstack-ansible 11.2.14 release.
