lbaasv2 ports not updated on liberty to mitaka upgrade

Bug #1712889 reported by Alex
This bug affects 1 person
Affects: OpenStack-Ansible
Status: Won't Fix
Importance: Undecided
Assigned to: Unassigned

Bug Description

After migrating from liberty to mitaka using:

https://docs.openstack.org/openstack-ansible/mitaka/upgrade-guide/manual-upgrade.html

all previously existing load balancers (using lbaasv2) fail to start. There are no errors in the neutron-lbaasv2-agent log, and everything looks normal in the CLI and Horizon. New load balancers start and work normally.

It appears that the hostname translation done in:

https://github.com/openstack/openstack-ansible/blob/13.3.18/scripts/upgrade-utilities/playbooks/rfc1034_1035-cleanup.yml

does not apply to lbaasv2 ports. So currently we have ports:

root@control1-utility-container-cd796ab4:~# neutron port-list -c device_owner -c binding:host_id | grep LOADBA
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network2_neutron_agents_container-4c4e8121 |
| neutron:LOADBALANCERV2 | network2_neutron_agents_container-4c4e8121 |
| neutron:LOADBALANCERV2 | network2_neutron_agents_container-4c4e8121 |
| neutron:LOADBALANCERV2 | network2_neutron_agents_container-4c4e8121 |
| neutron:LOADBALANCERV2 | network2-neutron-agents-container-4c4e8121 |
| neutron:LOADBALANCERV2 | network2_neutron_agents_container-4c4e8121 |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network2_neutron_agents_container-4c4e8121 |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network2_neutron_agents_container-4c4e8121 |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
| neutron:LOADBALANCERV2 | network1_neutron_agents_container-1728f97f |
<snip>

where you can see that a single entry has its underscores replaced:

| neutron:LOADBALANCERV2 | network2-neutron-agents-container-4c4e8121 |

and that is the one created after the migration, and the only one that works.

I'm happy to work on a patch to the RFC playbook if you can confirm this diagnosis is correct and provide some guidance on the best way to fix it (a manual SQL change, or a neutron command)?
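As a rough sketch of what such a fixup could look like, the translation the rfc1034_1035-cleanup.yml playbook applies is just replacing underscores with hyphens (RFC 1034/1035 hostnames forbid '_'). The port data below is illustrative, and whether `neutron port-update` accepts `--binding:host_id` as admin on a given release is an assumption to verify, not something this bug confirms:

```python
# Sketch of the underscore-to-hyphen hostname translation from
# rfc1034_1035-cleanup.yml, applied to lbaasv2 port host bindings.
# Port IDs and hosts below are illustrative placeholders.

def rfc1034_1035_host(host: str) -> str:
    """Replace underscores with hyphens; RFC 1034/1035 names forbid '_'."""
    return host.replace("_", "-")

# Hypothetical (port_id, binding:host_id) pairs as shown by `neutron port-list`.
ports = [
    ("11111111-aaaa", "network1_neutron_agents_container-1728f97f"),
    ("22222222-bbbb", "network2-neutron-agents-container-4c4e8121"),
]

for port_id, host in ports:
    fixed = rfc1034_1035_host(host)
    if fixed != host:
        # Candidate command; confirm binding:host_id is admin-updatable
        # on your release before running anything like this.
        print(f"neutron port-update {port_id} --binding:host_id {fixed}")
```

Only ports whose host still contains an underscore would be touched; already-translated entries pass through unchanged.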

Revision history for this message
Alex (akrohn) wrote :

Ran:

update ml2_port_bindings set host=replace(host, "_","-");
update ml2_port_binding_levels set host=replace(host, "_","-");

on the neutron database to fix up the hosts, but haproxy is still not launching for the old load balancers. I compared the output of loadbalancer, pool, member, port, and subnet between a working and a non-working instance, and the only difference that stands out is that the working one has created_at and updated_at fields on the port (new in mitaka).
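For reference, the two UPDATE statements above use MySQL's REPLACE() to do the same substitution as the playbook. A small sketch (with illustrative row data, not actual database contents) that previews which host values such an UPDATE would change, which can be useful to run against a dump first:

```python
# Preview the effect of:
#   UPDATE ml2_port_bindings SET host = REPLACE(host, '_', '-');
# Mirrors MySQL REPLACE() semantics in Python; rows are sample values only.
rows = [
    "network1_neutron_agents_container-1728f97f",
    "network2-neutron-agents-container-4c4e8121",  # already hyphenated, unchanged
]

changes = {h: h.replace("_", "-") for h in rows if "_" in h}
for old, new in changes.items():
    print(f"{old} -> {new}")
```

Rows without underscores are left alone, so re-running the statement is harmless.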

Still digging.

Revision history for this message
Alex (akrohn) wrote :

We weren't able to find the root cause. We tried to delete the load balancers, but that failed: the load balancer was stuck in the PENDING_UPDATE provisioning_status and the pool was stuck in PENDING_DELETE.

After manually deleting the references from the database, we could re-create the load balancers without issue.

So not sure if worth doing anything here. =)

Revision history for this message
Jean-Philippe Evrard (jean-philippe-evrard) wrote :

We can't fix anything in mitaka, as mitaka is end of life.

Is there anything impacting later versions?

Changed in openstack-ansible:
status: New → Won't Fix