The global amount of Octavia loadbalancers is constrained by the size of lb-mgmt-subnet
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| tripleo | Fix Released | Medium | Nir Magnezi | rocky-2 |
Bug Description
Description of problem:
=======================
Octavia creates Amphorae (service VMs) under an operator-configured project (tenant). In TripleO, we currently use the 'service' project by default.
Each Amphora instance has its own tap device in a shared management subnet named lb-mgmt-subnet. That subnet is concealed under the 'service' project and cannot be accessed by non-privileged users.
TripleO creates that subnet during the Octavia deployment process. Currently, it is created as a class C subnet with allocation_pools that effectively limit the number of addresses in that subnet to 150.
This means that, globally, for a given OpenStack deployment:
- 150 Amphorae ==> 150 Loadbalancers if the Amphora topology is SINGLE
- 150 Amphorae ==> 75 Loadbalancers if the Amphora topology is ACTIVE_STANDBY
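The arithmetic behind those limits can be sketched as follows. The pool size of 150 comes from the description above; the topology names match Octavia's loadbalancer_topology options, and the helper function itself is only an illustration:

```python
# Sketch of the capacity math described above. The allocation-pool size
# (150) is taken from this bug report; the mapping below reflects how
# many Amphorae each Octavia topology consumes per loadbalancer.
AMPHORAE_PER_LB = {
    "SINGLE": 1,          # one amphora per loadbalancer
    "ACTIVE_STANDBY": 2,  # an active/standby pair per loadbalancer
}

def max_loadbalancers(pool_size: int, topology: str) -> int:
    """Upper bound on loadbalancers given the lb-mgmt-subnet pool size."""
    return pool_size // AMPHORAE_PER_LB[topology]

print(max_loadbalancers(150, "SINGLE"))          # 150
print(max_loadbalancers(150, "ACTIVE_STANDBY"))  # 75
```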
Here's how it currently looks (snipped):
+------
| Field | Value |
+------
| allocation_pools | 192.168.
| cidr | 192.168.199.0/24 |
| created_at | 2018-05-
| enable_dhcp | True |
| gateway_ip | 192.168.199.1 |
| ip_version | 4 |
| name | lb-mgmt-subnet |
+------
Version-Release number of selected component (if applicable):
=============================================================
OSP13 2018-05-10.3 openstack-
How reproducible:
=================
100%
Steps to Reproduce:
===================
1. Deploy OpenStack with Octavia via TripleO
Actual results:
===============
As mentioned above.
Expected results:
=================
We should use a much larger subnet, such as a class B network, so the global number of Octavia loadbalancers won't be constrained to such a low number.
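To illustrate why a larger CIDR lifts the cap, here is a quick comparison of usable address counts between the current /24 and a class-B-sized /16 (172.16.0.0/16 is only an illustrative RFC 1918 range, not the range TripleO would necessarily pick):

```python
import ipaddress

# Compare usable host counts: the current /24 vs. a class-B-sized /16.
# 172.16.0.0/16 is an illustrative RFC 1918 range, not the TripleO default.
for cidr in ("192.168.199.0/24", "172.16.0.0/16"):
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2  # minus network and broadcast addresses
    print(f"{cidr}: {usable} usable addresses")
```

Even before subtracting addresses reserved for the gateway and DHCP, a /16 raises the ceiling from a few hundred Amphorae to tens of thousands.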
Changed in tripleo:
milestone: none → rocky-2
importance: Undecided → Medium
Current config: https://github.com/openstack/tripleo-common/blob/53513666974657f120def7811c95b36a30782b89/playbooks/roles/common/defaults/main.yml#L11-L14