After deleting a load balancer, VIP info still exists in the database table

Bug #1573725 reported by min wang
This bug affects 2 people
Affects: octavia
Status: Fix Released
Importance: High
Assigned to: Elena Ezhova
Milestone: (none)

Bug Description

I created a load balancer and deleted it afterwards. I expected the load balancer and amphora to be shown as deleted and the VIP info related to this load balancer to be cleared as well, but I can still see it in the database table. Is this by design, and if so, when is this info cleaned up?

mysql> select * from load_balancer;
+----------------------------------+--------------------------------------+------+-------------+---------------------+------------------+---------+----------+-----------------+
| project_id | id | name | description | provisioning_status | operating_status | enabled | topology | server_group_id |
+----------------------------------+--------------------------------------+------+-------------+---------------------+------------------+---------+----------+-----------------+
| 54d4e1b0776e4bc7a58bddc2a5e900a3 | 99d064ef-3038-4db9-a5da-d53e8cac7c4d | lb1 | | DELETED | ONLINE | 1 | SINGLE | NULL |
+----------------------------------+--------------------------------------+------+-------------+---------------------+------------------+---------+----------+-----------------+
1 row in set (0.00 sec)

mysql> select * from amphora;
+--------------------------------------+--------------------------------------+---------+--------------------------------------+---------------+----------+----------+--------------------------------------+--------------------------------------+------------+---------------------+-----------+----------------+---------+---------------+
| id | compute_id | status | load_balancer_id | lb_network_ip | vrrp_ip | ha_ip | vrrp_port_id | ha_port_id | role | cert_expiration | cert_busy | vrrp_interface | vrrp_id | vrrp_priority |
+--------------------------------------+--------------------------------------+---------+--------------------------------------+---------------+----------+----------+--------------------------------------+--------------------------------------+------------+---------------------+-----------+----------------+---------+---------------+
| a02bcedb-227e-47a2-bba4-47c58ca82e88 | 03ca5269-7f74-4670-92b8-68712cfb40cd | DELETED | 99d064ef-3038-4db9-a5da-d53e8cac7c4d | 192.168.0.4 | 10.0.0.4 | 10.0.0.3 | 2822dd69-1d31-403c-b3fb-ce9179d71371 | 9b5e041e-4b2a-4005-96e3-fcb263e421a9 | STANDALONE | 2018-04-22 17:41:07 | 0 | NULL | 1 | NULL |
+--------------------------------------+--------------------------------------+---------+--------------------------------------+---------------+----------+----------+--------------------------------------+--------------------------------------+------------+---------------------+-----------+----------------+---------+---------------+
1 row in set (0.00 sec)

mysql> select * from vip;
+--------------------------------------+------------+--------------------------------------+--------------------------------------+
| load_balancer_id | ip_address | port_id | subnet_id |
+--------------------------------------+------------+--------------------------------------+--------------------------------------+
| 99d064ef-3038-4db9-a5da-d53e8cac7c4d | 10.0.0.3 | 9b5e041e-4b2a-4005-96e3-fcb263e421a9 | 29e8791b-568c-4e68-afed-d0300488897b |
+--------------------------------------+------------+--------------------------------------+--------------------------------------+

Revision history for this message
Michael Johnson (johnsom) wrote :

The two records marked "DELETED" are what I would expect. The VIP record concerns me; we need to figure out why these aren't getting cleaned up.

Changed in octavia:
importance: Undecided → High
Revision history for this message
Elena Ezhova (eezhova) wrote :

I also wonder why load balancers are just marked as DELETED in the DB instead of actually being deleted. If the original intent was to leave their cleanup to the housekeeper daemon (as is currently done with amphorae), then we should add this to the housekeeper's db_cleanup routine. But I'm not sure there is a need to offload this rather lightweight DB task from the controller worker.

In any case, deleting load balancers from the DB would trigger a cascade delete of the corresponding VIPs. [1]

[1] https://github.com/openstack/octavia/blob/master/octavia/db/models.py#L332-L334
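
For context, a minimal SQLAlchemy sketch of the kind of cascading relationship [1] points at; the model and column names here are illustrative stand-ins, not Octavia's actual definitions:

# Illustrative SQLAlchemy sketch: deleting the parent load balancer row
# through the ORM also removes its VIP row because of the cascade rule.
from sqlalchemy import Column, ForeignKey, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class LoadBalancer(Base):
    __tablename__ = 'load_balancer'
    id = Column(String(36), primary_key=True)
    # "all, delete-orphan" makes the ORM delete the child Vip whenever
    # the parent LoadBalancer is deleted.
    vip = relationship('Vip', cascade='all, delete-orphan', uselist=False)

class Vip(Base):
    __tablename__ = 'vip'
    load_balancer_id = Column(String(36), ForeignKey('load_balancer.id'),
                              primary_key=True)
    ip_address = Column(String(64))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(LoadBalancer(id='99d064ef', vip=Vip(ip_address='10.0.0.3')))
    session.commit()

    session.delete(session.get(LoadBalancer, '99d064ef'))
    session.commit()
    print(session.query(Vip).count())  # 0 -- the VIP row is gone as well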

Changed in octavia:
assignee: nobody → Elena Ezhova (eezhova)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to octavia (master)

Fix proposed to branch: master
Review: https://review.openstack.org/332913

Changed in octavia:
status: New → In Progress
Revision history for this message
Michael Johnson (johnsom) wrote :

Thanks Elena! Yes, we intended to mark these deleted and have the housekeeping manager clean them up, so that a database record of the load balancer is kept for a "grace" period. This grace period is configurable, so the operator can decide whether to keep "DELETED" records for a week, a month, or not at all.
Thank you for catching this and putting up a patch.
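
For reference, a rough oslo.config sketch of how such a grace-period knob can be registered; the option name and config section match those introduced by the fix below, but the default value shown here is only an assumption:

# Rough oslo.config sketch of a grace-period option (default value assumed).
from oslo_config import cfg

house_keeping_opts = [
    cfg.IntOpt('load_balancer_expiry_age',
               default=604800,  # one week in seconds; assumed, check the docs
               help='Seconds to keep DELETED load balancer records before '
                    'the housekeeping task purges them from the database.'),
]

CONF = cfg.CONF
CONF.register_opts(house_keeping_opts, group='house_keeping')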

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to octavia (master)

Reviewed: https://review.openstack.org/332913
Committed: https://git.openstack.org/cgit/openstack/octavia/commit/?id=d73df70d850a927a7673e150ab6279a6bb77435f
Submitter: Jenkins
Branch: master

commit d73df70d850a927a7673e150ab6279a6bb77435f
Author: Elena Ezhova <email address hidden>
Date: Wed Jun 22 19:36:56 2016 +0300

    Cleanup deleted load balancers in housekeeper's db_cleanup

    When a load balancer is deleted, the corresponding DB entry is only
    marked as DELETED and is never actually removed, and neither is the
    VIP associated with this load balancer.

    This adds a new method to the db_cleanup routine that scans the DB
    for load balancers with DELETED provisioning_status and deletes them
    from the DB if they are older than load_balancer_expiry_age. The
    corresponding VIP entries are deleted in cascade.

    Added new config option `load_balancer_expiry_age` to the `house_keeping`
    config section.

    Also changed the default value of the exp_age argument to
    CONF.house_keeping.amphora_expiry_age in the check_amphora_expiry_age
    method.

    DocImpact
    Closes-Bug #1573725

    Change-Id: I4f99d38f44f218ac55a76ef062ed9ea401c0a02d
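
A rough sketch of the cleanup step the commit message describes; the repository helpers used here (get_all, delete) and the lb.updated_at attribute are hypothetical stand-ins, and the real implementation is in the review linked above:

# Hypothetical sketch of the db_cleanup step described in the commit message.
import datetime

from oslo_config import cfg

CONF = cfg.CONF

def cleanup_deleted_load_balancers(session, lb_repo):
    """Purge load balancers that have been DELETED longer than the grace period."""
    expiry_age = datetime.timedelta(
        seconds=CONF.house_keeping.load_balancer_expiry_age)
    expiry_time = datetime.datetime.utcnow() - expiry_age

    for lb in lb_repo.get_all(session, provisioning_status='DELETED'):
        if lb.updated_at < expiry_time:
            # Deleting the load_balancer row cascades to its vip row, so the
            # lingering VIP record reported in this bug is removed as well.
            lb_repo.delete(session, id=lb.id)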

Changed in octavia:
status: In Progress → Fix Released