Delete of last member does not remove it from haproxy config

Bug #1514510 reported by Michael Johnson
This bug affects 1 person
Affects Status Importance Assigned to Milestone
octavia
Fix Released
Critical
Eran Raichstein

Bug Description

I had set up an active/standby load balancer with two members, then deleted them by UUID one after the other.
No errors were reported via the CLI, o-cw, or amphora-agent logs, but the last member is still present in the haproxy configuration file on the amphora. Logs indicated that a new configuration was pushed out and the listener restart command was run.
neutron lbaas-member-list shows no members on the pool.

It is unclear if this is an active/standby-related bug or if it affects standalone mode as well. It could be related to bug #1494956.

Eran Raichstein (eranra)
Changed in octavia:
assignee: nobody → Eran Raichstein (eranra)
Revision history for this message
Eran Raichstein (eranra) wrote :

RE-RUN BUG # 1514510

The bug is still reproducible on the current Octavia version. I deleted two members from the pool and both are removed from the Neutron member list, but only the first one is removed from the configuration file.

1. Using the latest octavia / devstack versions::

Octavia is on ::
commit cd5a50adb3a1a42bb1dfd5cc6e1d03029cd795e5
Author: Michael Johnson <email address hidden>
Date: Fri Dec 18 20:06:23 2015 +0000

2. Added loadbalancer_topology = ACTIVE_STANDBY to /etc/octavia/octavia.conf (to force an active/standby configuration)
3. Executed ./stack.sh

DONE

4. Executed a script to create the load balancer, listener, and pool, and to attach members (script content is listed below)

./create_octavia_demo_environment.sh fast

5. Checked that the LB works (did not check any fail-over scenario for this bug)

curl 20.0.0.13

DONE

6. Listed the nova VMs:

nova list --all-tenants

+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------+
| ID | Name | Tenant ID | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------+
| 03c8c314-91ae-4812-8f86-65f42ad79fd9 | amphora-9be90952-fa15-40bc-bab2-13d10b6583fd | 6200e5d3539b46e3b748b01d99fad689 | ACTIVE | - | Running | lb-mgmt-net=192.168.0.5; production=20.0.0.14; private=10.0.0.9 |
| fb96cd7a-f541-4226-954d-c9615d447ad3 | amphora-d2840264-61ed-4bee-9013-61b9acb04011 | 6200e5d3539b46e3b748b01d99fad689 | ACTIVE | - | Running | lb-mgmt-net=192.168.0.4; production=20.0.0.15; private=10.0.0.10 |
| 81564676-cfae-4bcc-a356-8df2b32f48c6 | demonode1 | 17e1e45ab43e4730b06789a0a1a2c642 | ACTIVE | - | Running | private=10.0.0.4 |
| e11c871c-e0fa-4f34-8b02-9a8417fac972 | demonode2 | 17e1e45ab43e4730b06789a0a1a2c642 | ACTIVE | - | Running | private=10.0.0.5 |
| 54e8ed66-a3a6-4d73-810b-5cabfde8c83e | demonode3 | 17e1e45ab43e4730b06789a0a1a2c642 | ACTIVE | - | Running | private=10.0.0.6 |
| 7201e674-88c4-4318-9fef-9194ad06639b | demonode4 | 17e1e45ab43e4730b06789a0a1a2c642 | ACTIVE | - | Running | private=10.0.0.7 |
| 15662291-66ef-4871-98f4-93490c79b7bb | demonode5 | 17e1e45ab43e4730b06789a0a1a2c642 | ACTIVE | - | Running | private=10.0.0.8 |
| 76180a42-d8d8-4c7b-90b3-1c00adf399f0 | stresser1 ...

Revision history for this message
Eran Raichstein (eranra) wrote :

SELECT * FROM octavia.member;

# project_id, id, pool_id, subnet_id, ip_address, protocol_port, weight, operating_status, enabled
, 340c12af-82d0-4f07-8a7a-89eb99fa6ae1, 225ad656-2610-4ce5-bf1d-bcedd3cae9ea, a7c9d665-60db-445f-ad86-49da016b52c9, 10.0.0.7, 80, 1, OFFLINE, 1
, 3c19fc1f-7a12-4705-9971-46b2c5fc1ae7, 225ad656-2610-4ce5-bf1d-bcedd3cae9ea, a7c9d665-60db-445f-ad86-49da016b52c9, 10.0.0.6, 80, 1, OFFLINE, 1
, e411f036-a1d6-465d-97a2-06b8074c32da, 225ad656-2610-4ce5-bf1d-bcedd3cae9ea, a7c9d665-60db-445f-ad86-49da016b52c9, 10.0.0.8, 80, 1, OFFLINE, 1

(only 3 entries remain, so the deleted members' rows were removed from the database)

Revision history for this message
Eran Raichstein (eranra) wrote :

The bug is now understood::

get_delete_member_flow original order:
1. DeleteModelObject
2. ListenerUpdate
3. DeleteMemberInDB
4. MarkLBAndListenerActiveInDB

Since DeleteMemberInDB is called after ListenerUpdate, the information sent to the amphora still contains the original (pre-delete) list of members. I changed the order (i.e. swapped task 2 with task 3) so that DeleteMemberInDB now happens before ListenerUpdate, and the configuration sent to the amphora contains the updated list of members.

New get_delete_member_flow code looks like::

    def get_delete_member_flow(self):
        """Create a flow to delete a member

        :returns: The flow for deleting a member
        """
        delete_member_flow = linear_flow.Flow(constants.DELETE_MEMBER_FLOW)
        delete_member_flow.add(model_tasks.
                               DeleteModelObject(rebind={constants.OBJECT:
                                                         constants.MEMBER}))
        delete_member_flow.add(database_tasks.DeleteMemberInDB(
            requires=constants.MEMBER_ID))
        delete_member_flow.add(amphora_driver_tasks.ListenerUpdate(
            requires=[constants.LISTENER, constants.VIP]))
        delete_member_flow.add(database_tasks.
                               MarkLBAndListenerActiveInDB(
                                   requires=[constants.LOADBALANCER,
                                             constants.LISTENER]))

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to octavia (master)

Fix proposed to branch: master
Review: https://review.openstack.org/260605

Changed in octavia:
status: New → In Progress
Revision history for this message
Eran Raichstein (eranra) wrote :
Changed in octavia:
status: In Progress → Confirmed
status: Confirmed → Fix Committed
Revision history for this message
Eran Raichstein (eranra) wrote :

Confirmed, this is now fixed.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/267045

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on octavia (master)

Change abandoned by Eran Raichstein (<email address hidden>) on branch: master
Review: https://review.openstack.org/267045
Reason: Mistake in Change-Id

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to octavia (master)

Reviewed: https://review.openstack.org/260605
Committed: https://git.openstack.org/cgit/openstack/octavia/commit/?id=080f6101fd56ff799c68e420c158458428e5bc0f
Submitter: Jenkins
Branch: master

commit 080f6101fd56ff799c68e420c158458428e5bc0f
Author: EranRaichstein <email address hidden>
Date: Wed Jan 13 17:59:54 2016 +0200

    Fix a problem of memebrs not deleted from pool

    Get_delete_member_flow original tasks order was:

    1. DeleteModelObject
    2. ListenerUpdate
    3. DeleteMemberInDB
    4. MarkLBAndListenerActiveInDB

    Since memebrs are deleted from Listener only by DeleteMemberInDB,
    ListenerUpdate is being called with all members (including the
    member that should be deleted). The patch changes the order of
    tasks. Calling DeleteMemberInDB before ListenerUpdate (i.e.
    switch task 2 & 3) and thus the memebr is removed from HAProxy
    List and info is updated to Amphorae as requested.

    Change call to DeleteMemberInDB to take member and not member_id
    change call to ListenerUpdate to use constants.LOADBALANCER, constants.LISTENER

    Change-Id: I159831ca13fc8864798972f75d4b0c6e1fcecf26
    Closes-Bug: #1514510

Changed in octavia:
status: Fix Committed → Fix Released