Comment 3 for bug 1487409

OpenStack Infra (hudson-openstack) wrote : Fix merged to os-ansible-deployment (kilo)

Reviewed: https://review.openstack.org/220089
Committed: https://git.openstack.org/cgit/stackforge/os-ansible-deployment/commit/?id=763da9ed9d4d80cba77397af2bb17ce199b7f73f
Submitter: Jenkins
Branch: kilo

commit 763da9ed9d4d80cba77397af2bb17ce199b7f73f
Author: Jean-Philippe Evrard <email address hidden>
Date: Fri Aug 21 12:56:30 2015 +0200

    Fix haproxy playbook failure when installing on multiple hosts

    This bug is triggered when haproxy is deployed on multiple hosts
    and external_lb_vip is different from the internal one.
    Because all hosts receive the same configuration and are expected
    to restart the haproxy service more than once
    (once during the role and once in post_tasks), the playbook fails
    when the service restart fails. The restart fails on some hosts
    because haproxy tries to start and bind to an IP address the host
    does not hold (the VIP is kept off those hosts to avoid IP conflicts).

    This change allows haproxy to bind to non-local addresses by
    adding a sysctl change in the playbook: net.ipv4.ip_nonlocal_bind = 1
    The sysctl is changed on the containers/systems, via a group_var,
    when external_lb_vip is different from the internal address and
    there is more than one haproxy host.
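
    As a rough sketch of what this boils down to (the group_var
    expression and task below are illustrative, assuming the usual
    external_lb_vip_address / internal_lb_vip_address variables and a
    haproxy_hosts group, not the exact playbook content):

        # group_vars sketch: enable non-local binds only when the VIPs
        # differ and haproxy runs on more than one host (assumed names)
        haproxy_bind_on_non_local: "{{ (external_lb_vip_address != internal_lb_vip_address) and (groups['haproxy_hosts'] | length > 1) }}"

        # playbook task sketch using the stock Ansible sysctl module
        - name: Allow haproxy to bind to non-local VIP addresses
          sysctl:
            name: net.ipv4.ip_nonlocal_bind
            value: 1
            sysctl_set: yes
            state: present
          when: haproxy_bind_on_non_local | bool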

    Side effect: other services are also able to bind to non-local
    addresses once the sysctl is changed.

    This can be overridden by setting the variable haproxy_bind_on_non_local
    in your user_* variables. If set to false, the ip_nonlocal_bind
    sysctl will not be changed.
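
    For example, a minimal override sketch (the user_variables.yml path
    is an assumption based on the usual user_* variables layout):

        # /etc/openstack_deploy/user_variables.yml (assumed path)
        haproxy_bind_on_non_local: false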

    Closes-Bug: #1487409

    Change-Id: I41b3a5a4ba2d48192b505e3720456a77484aa92b
    (cherry picked from commit 9df04fed70eb9f9e84f6da2ac5bb4d94df037fe6)