HAProxy prechecks fail when scaling out to a new host with --limit or --serial

Bug #1868986 reported by Mark Goddard
Affects               Status        Importance  Assigned to
kolla-ansible         Fix Released  Medium      Mark Goddard
kolla-ansible/stein   Fix Released  Medium      Mark Goddard
kolla-ansible/train   Fix Released  Medium      Mark Goddard
kolla-ansible/ussuri  Fix Released  Medium      Mark Goddard

Bug Description

Steps to reproduce
------------------

Deploy HAProxy on one or more servers. Add another server to the inventory in the haproxy group, and run the following:

kolla-ansible prechecks --limit <new host>
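For reference, a minimal sketch of the inventory change that triggers this (hostnames are hypothetical, not taken from the report):

```ini
# multinode inventory -- controller1 is the newly added host
[haproxy]
controller0    # existing host, currently running haproxy/keepalived
controller1    # new host; with --limit controller1, only this host is in the play
```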

Expected results
----------------

Prechecks pass

Actual results
--------------

TASK [haproxy : Checking if kolla_internal_vip_address and kolla_external_vip_address are not pingable from any node] **************************************************************************************
failed: [controller0] (item={u'command': u'ping', u'address': u'192.168.33.2'}) => {"ansible_loop_var": "item", "changed": false, "cmd": ["ping", "-c", "3", "192.168.33.2"], "delta": "0:00:02.059599", "end": "2020-03-24 17:59:56.080378", "failed_when_result": true, "item": {"address": "192.168.33.2", "command": "ping"}, "rc": 0, "start": "2020-03-24 17:59:54.020779", "stderr": "", "stderr_lines": [], "stdout": "PING 192.168.33.2 (192.168.33.2) 56(84) bytes of data.\n64 bytes from 192.168.33.2: icmp_seq=1 ttl=64 time=0.479 ms\n64 bytes from 192.168.33.2: icmp_seq=2 ttl=64 time=0.219 ms\n64 bytes from 192.168.33.2: icmp_seq=3 ttl=64 time=0.172 ms\n\n--- 192.168.33.2 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 57ms\nrtt min/avg/max/mdev = 0.172/0.290/0.479/0.135 ms", "stdout_lines": ["PING 192.168.33.2 (192.168.33.2) 56(84) bytes of data.", "64 bytes from 192.168.33.2: icmp_seq=1 ttl=64 time=0.479 ms", "64 bytes from 192.168.33.2: icmp_seq=2 ttl=64 time=0.219 ms", "64 bytes from 192.168.33.2: icmp_seq=3 ttl=64 time=0.172 ms", "", "--- 192.168.33.2 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 57ms", "rtt min/avg/max/mdev = 0.172/0.290/0.479/0.135 ms"]}
failed: [controller0] (item={u'command': u'ping', u'address': u'192.168.33.2'}) => {"ansible_loop_var": "item", "changed": false, "cmd": ["ping", "-c", "3", "192.168.33.2"], "delta": "0:00:02.032246", "end": "2020-03-24 17:59:58.576390", "failed_when_result": true, "item": {"address": "192.168.33.2", "command": "ping"}, "rc": 0, "start": "2020-03-24 17:59:56.544144", "stderr": "", "stderr_lines": [], "stdout": "PING 192.168.33.2 (192.168.33.2) 56(84) bytes of data.\n64 bytes from 192.168.33.2: icmp_seq=1 ttl=64 time=0.279 ms\n64 bytes from 192.168.33.2: icmp_seq=2 ttl=64 time=0.181 ms\n64 bytes from 192.168.33.2: icmp_seq=3 ttl=64 time=0.271 ms\n\n--- 192.168.33.2 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 30ms\nrtt min/avg/max/mdev = 0.181/0.243/0.279/0.047 ms", "stdout_lines": ["PING 192.168.33.2 (192.168.33.2) 56(84) bytes of data.", "64 bytes from 192.168.33.2: icmp_seq=1 ttl=64 time=0.279 ms", "64 bytes from 192.168.33.2: icmp_seq=2 ttl=64 time=0.181 ms", "64 bytes from 192.168.33.2: icmp_seq=3 ttl=64 time=0.271 ms", "", "--- 192.168.33.2 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 30ms", "rtt min/avg/max/mdev = 0.181/0.243/0.279/0.047 ms"]}

This happens because Ansible does not execute on the hosts where haproxy/keepalived is already running, and therefore does not know that the VIP should be active.
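The merged fix skips the VIP prechecks when not all HAProxy hosts are in the play. A simplified sketch of that guard, assuming approximate task and variable names rather than the exact merged code:

```yaml
# Sketch only: skip the VIP ping prechecks unless every host in the
# haproxy group is in the current play, since a host outside the play
# may legitimately be holding the VIP.
- name: Check if all haproxy hosts are in the play
  set_fact:
    all_haproxy_hosts_in_play: >-
      {{ groups['haproxy'] | difference(ansible_play_batch) | length == 0 }}

- name: Checking if kolla_internal_vip_address is not pingable from any node
  command: ping -c 3 {{ kolla_internal_vip_address }}
  register: ping_output
  changed_when: false
  # ping exits 0 on success; the precheck should fail if the VIP answers
  failed_when: ping_output.rc == 0
  when: all_haproxy_hosts_in_play | bool
```

With this condition, a `--limit <new host>` run leaves `all_haproxy_hosts_in_play` false, so the ping check is skipped instead of failing against the VIP held by an existing host.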

Mark Goddard (mgoddard)
Changed in kolla-ansible:
importance: Undecided → Medium
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to kolla-ansible (master)

Fix proposed to branch: master
Review: https://review.opendev.org/714950

Changed in kolla-ansible:
assignee: nobody → Mark Goddard (mgoddard)
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to kolla-ansible (master)

Reviewed: https://review.opendev.org/714950
Committed: https://git.openstack.org/cgit/openstack/kolla-ansible/commit/?id=f3350d4e130b17b4035a8b67420d7015fc6cc99b
Submitter: Zuul
Branch: master

commit f3350d4e130b17b4035a8b67420d7015fc6cc99b
Author: Mark Goddard <email address hidden>
Date: Wed Mar 25 11:00:31 2020 +0000

    Fix HAProxy prechecks during scale-out with limit

    Deploy HAProxy on one or more servers. Add another server to the
    inventory in the haproxy group, and run the following:

    kolla-ansible prechecks --limit <new host>

    The following task will fail:

        TASK [haproxy : Checking if kolla_internal_vip_address and
        kolla_external_vip_address are not pingable from any node]

    This happens because ansible does not execute on hosts where
    haproxy/keepalived is running, and therefore does not know that the VIP
    should be active.

    This change skips VIP prechecks when not all HAProxy hosts are in the
    play.

    Closes-Bug: #1868986

    Change-Id: Ifbc73806b768f76f803ab01c115a9e5c2e2492ac

Changed in kolla-ansible:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to kolla-ansible (stable/train)

Fix proposed to branch: stable/train
Review: https://review.opendev.org/716915

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to kolla-ansible (stable/stein)

Fix proposed to branch: stable/stein
Review: https://review.opendev.org/716917

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to kolla-ansible (stable/train)

Reviewed: https://review.opendev.org/716915
Committed: https://git.openstack.org/cgit/openstack/kolla-ansible/commit/?id=79747c4d8dfcad54a120d3f587501e8e7300d5b7
Submitter: Zuul
Branch: stable/train

commit 79747c4d8dfcad54a120d3f587501e8e7300d5b7
Author: Mark Goddard <email address hidden>
Date: Wed Mar 25 11:00:31 2020 +0000

    Fix HAProxy prechecks during scale-out with limit

    Deploy HAProxy on one or more servers. Add another server to the
    inventory in the haproxy group, and run the following:

    kolla-ansible prechecks --limit <new host>

    The following task will fail:

        TASK [haproxy : Checking if kolla_internal_vip_address and
        kolla_external_vip_address are not pingable from any node]

    This happens because ansible does not execute on hosts where
    haproxy/keepalived is running, and therefore does not know that the VIP
    should be active.

    This change skips VIP prechecks when not all HAProxy hosts are in the
    play.

    Closes-Bug: #1868986

    Change-Id: Ifbc73806b768f76f803ab01c115a9e5c2e2492ac
    (cherry picked from commit f3350d4e130b17b4035a8b67420d7015fc6cc99b)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to kolla-ansible (stable/stein)

Reviewed: https://review.opendev.org/716917
Committed: https://git.openstack.org/cgit/openstack/kolla-ansible/commit/?id=2e514a8cf310afc7e617813ab2ad1080eb72a575
Submitter: Zuul
Branch: stable/stein

commit 2e514a8cf310afc7e617813ab2ad1080eb72a575
Author: Mark Goddard <email address hidden>
Date: Wed Mar 25 11:00:31 2020 +0000

    Fix HAProxy prechecks during scale-out with limit

    Deploy HAProxy on one or more servers. Add another server to the
    inventory in the haproxy group, and run the following:

    kolla-ansible prechecks --limit <new host>

    The following task will fail:

        TASK [haproxy : Checking if kolla_internal_vip_address and
        kolla_external_vip_address are not pingable from any node]

    This happens because ansible does not execute on hosts where
    haproxy/keepalived is running, and therefore does not know that the VIP
    should be active.

    This change skips VIP prechecks when not all HAProxy hosts are in the
    play.

    Closes-Bug: #1868986

    Change-Id: Ifbc73806b768f76f803ab01c115a9e5c2e2492ac
    (cherry picked from commit f3350d4e130b17b4035a8b67420d7015fc6cc99b)
    (cherry picked from commit 79747c4d8dfcad54a120d3f587501e8e7300d5b7)
