Do not define/configure lbaasv2 agents when deploying Octavia on Rocky+
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Neutron Gateway Charm | Fix Released | Medium | Unassigned |
Bug Description
In a typical deployment, the neutron-gateway charm installs and configures the neutron-lbaasv2-agent. neutron-gateway should get a relation to octavia as an lbaas-provider so it can update the neutron-lbaas.conf file with an Octavia-appropriate configuration and drop the neutron-lbaasv2-agent configuration.
These installs also accumulate a very large backlog in the RabbitMQ queue n-lbaasv2-plugin, caused by the lbaasv2-agent repeatedly trying to call the haproxy backend.
The reference cloud is a Foundation Cloud running bionic-rocky on 19.04 charms with Octavia enabled in LXD, on Juju 2.5.4 / MAAS 2.5.x.
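The queue backlog described above can be confirmed on the RabbitMQ unit. A minimal sketch, assuming the standard `rabbitmqctl` tooling is available there; the exact queue name (`n-lbaasv2-plugin` in this report) may vary by release:

```shell
# List queue depths and pick out the lbaasv2 plugin queue.
# A large, growing "messages" count indicates the stuck agent traffic.
rabbitmqctl list_queues name messages | grep lbaas
```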
Changed in charm-neutron-gateway:
status: New → Triaged
importance: Undecided → Medium

Changed in charm-neutron-gateway:
milestone: none → 20.08

Changed in charm-neutron-gateway:
status: Fix Committed → Fix Released
The behavior of keeping the lbaasv2-agent around on the ``neutron-gateway`` is intentional, to support deployments migrating from the built-in Neutron agent to Octavia.
When you add Octavia to the deployment and relate it to the ``neutron-api`` charm, the Neutron API will be configured with the ``lbaasv2-proxy`` service plugin, which forwards any LBaaS requests to the Octavia API.
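A sketch of what the resulting Neutron server configuration looks like; the exact plugin list is deployment-specific and managed by the charm, so this excerpt is illustrative only:

```ini
# /etc/neutron/neutron.conf on the neutron-api units (illustrative excerpt)
[DEFAULT]
# lbaasv2-proxy forwards LBaaS v2 API requests on to the Octavia API
service_plugins = router,lbaasv2-proxy
```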
In short, after Octavia has been deployed all new load balancers will be created by Octavia, and any existing load balancers already running on the gateways will be left untouched.
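The migration workflow described above amounts to something like the following; application names and the relation endpoint are assumptions based on a typical charmed OpenStack deployment, not taken from this report:

```shell
# Deploy the Octavia charm and relate it to the existing Neutron API
# application; this triggers the lbaasv2-proxy configuration described above.
juju deploy octavia
juju add-relation octavia neutron-api
```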
I can see this behavior might be unwanted when deploying a fresh cloud, and we do intend to remove it altogether once the built-in Neutron support is removed upstream.
Could you share some more detail about the side effects of having it enabled but unused on the gateways? Is the message queue buildup the main issue, or are there others too?