The global number of Octavia load balancers is constrained by the service project quotas

Bug #1769896 reported by Nir Magnezi
This bug affects 1 person
Affects:      tripleo
Status:       Fix Released
Importance:   Medium
Assigned to:  Brent Eagles
Milestone:    rocky-2

Bug Description

First discovered in https://bugzilla.redhat.com/show_bug.cgi?id=1560422

Octavia creates Amphorae (service VMs) under an operator-configured project (tenant). In TripleO, we currently use the 'service' project by default.

Since booting Amphorae consumes the project's quota, this effectively imposes a very low global upper limit (around 10) on the number of load balancers.

This is contrary to how quotas are normally used in OpenStack.

To conclude:
When user 'test' creates a load balancer in project 'xyz':
1. Load balancer related quota is consumed for project 'xyz' (expected).
2. Ports, cores, instances, RAM and security group quotas are consumed for project 'service', which will eventually prevent users from *any* project from creating load balancers, even if they have not fully consumed their own load balancer related project quota.

To fix this:
We need to treat the 'service' project as a system project, so that Octavia VMs are not limited by project quotas. We need to set '-1' (unlimited) for the following quotas (in the 'service' project only; a sketch using openstacksdk follows the list):
1. ports
2. cores
3. instances
4. ram
5. security groups
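
As an illustration only, a minimal sketch of how these quotas could be raised to unlimited with openstacksdk's cloud-layer quota helpers. The cloud name 'overcloud' and the direct SDK approach are assumptions; the actual fix applies equivalent values via tripleo-common during deployment.

    # Sketch: set the amphora-related quotas to unlimited (-1) for the
    # project that owns the Octavia service VMs.
    import openstack

    conn = openstack.connect(cloud='overcloud')   # assumed cloud name
    project = 'service'                           # project owning the amphorae

    # Nova-side quotas consumed by each amphora instance.
    conn.set_compute_quotas(project, instances=-1, cores=-1, ram=-1)

    # Neutron-side quotas consumed by amphora ports and security groups.
    # (A follow-up patch on this bug also unlimits security group rules.)
    conn.set_network_quotas(project,
                            ports=-1,
                            security_groups=-1,
                            security_group_rules=-1)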

Revision history for this message
Nir Magnezi (nmagnezi) wrote :

2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server [-] Exception during message handling: OverQuotaClient: Quota exceeded for resources: ['security_group'].
Neutron server returns request_ids: ['req-a80c6e5c-e8ff-4b59-91c7-5c8e21c5b76e']
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/queue/endpoint.py", line 44, in create_load_balancer
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server self.worker.create_load_balancer(load_balancer_id)
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/worker/controller_worker.py", line 284, in create_load_balancer
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server create_lb_tf.run()
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 247, in run
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server for _state in self.run_iter(timeout=timeout):
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server failure.Failure.reraise_if_any(er_failures)
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 336, in reraise_if_any
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server failures[0].reraise()
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 343, in reraise
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server six.reraise(*self._exc_info)
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server result = task.execute(**arguments)
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/network_tasks.py", line 278, in execute
2018-05-08 12:45:40.090 21 ERROR oslo_messaging.r...


Revision history for this message
Nir Magnezi (nmagnezi) wrote :

Current default quota numbers:

ports              500
cores               20
instances           10
ram (MB)         51200
security groups     10
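
For completeness, a minimal sketch of how the effective values for the 'service' project could be read back with openstacksdk (the cloud name is an assumption):

    import openstack

    conn = openstack.connect(cloud='overcloud')   # assumed cloud name
    print(conn.get_compute_quotas('service'))     # instances, cores, ram
    print(conn.get_network_quotas('service'))     # ports, security groups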

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to tripleo-common (master)

Fix proposed to branch: master
Review: https://review.openstack.org/567641

Changed in tripleo:
assignee: nobody → Brent Eagles (beagles)
status: New → In Progress
Changed in tripleo:
importance: Undecided → Medium
milestone: none → rocky-2
Changed in tripleo:
assignee: Brent Eagles (beagles) → Carlos Goncalves (cgoncalves)
Brent Eagles (beagles)
Changed in tripleo:
assignee: Carlos Goncalves (cgoncalves) → Brent Eagles (beagles)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to tripleo-common (master)

Reviewed: https://review.openstack.org/567641
Committed: https://git.openstack.org/cgit/openstack/tripleo-common/commit/?id=287f110d187b7fe508858e157d9c9ddb6b398f1e
Submitter: Zuul
Branch: master

commit 287f110d187b7fe508858e157d9c9ddb6b398f1e
Author: Brent Eagles <email address hidden>
Date: Thu May 10 13:59:47 2018 -0230

    Increase services project quotas when deploying Octavia

    Octavia currently launches load balancer VMs in the services tenant
    which has very low quotas by default. This patch increases the values
    through the API for the services project to permit lots of load
    balancers.

    Closes Bug: #1769896

    Change-Id: I6b87b147d301dea3251fe4509b04cd4b9b27ddba

Changed in tripleo:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to tripleo-common (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.openstack.org/569367

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to tripleo-common (stable/queens)

Reviewed: https://review.openstack.org/569367
Committed: https://git.openstack.org/cgit/openstack/tripleo-common/commit/?id=f80838baab4f586f13696c60da39eeafb22aaedd
Submitter: Zuul
Branch: stable/queens

commit f80838baab4f586f13696c60da39eeafb22aaedd
Author: Brent Eagles <email address hidden>
Date: Thu May 10 13:59:47 2018 -0230

    Increase services project quotas when deploying Octavia

    Octavia currently launches load balancer VMs in the services tenant
    which has very low quotas by default. This patch increases the values
    through the API for the services project to permit lots of load
    balancers.

    Closes Bug: #1769896

    Change-Id: I6b87b147d301dea3251fe4509b04cd4b9b27ddba
    (cherry picked from commit 287f110d187b7fe508858e157d9c9ddb6b398f1e)

tags: added: in-stable-queens
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to tripleo-common (master)

Fix proposed to branch: master
Review: https://review.openstack.org/570758

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to tripleo-common (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.openstack.org/570759

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to tripleo-common (master)

Reviewed: https://review.openstack.org/570758
Committed: https://git.openstack.org/cgit/openstack/tripleo-common/commit/?id=c27024a2c8cd00311ddd580f394f7c85872326a8
Submitter: Zuul
Branch: master

commit c27024a2c8cd00311ddd580f394f7c85872326a8
Author: Nir Magnezi <email address hidden>
Date: Mon May 28 12:01:24 2018 +0300

    Increase services project secgroup-rules quotas when deploying Octavia

    This patch is a followup to I6b87b147d301dea3251fe4509b04cd4b9b27ddba,
    which lacked the secgroup-rules quota.

    The reasoning is the same:
    Octavia currently launches load balancer VMs in the services tenant
    which has very low quotas by default. This patch increases the values
    through the API for the services project to permit lots of load
    balancers.

    Closes Bug: #1769896

    Change-Id: I8ad5175f7ca79939725a0c2342378a052b5c6bd2

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to tripleo-common (stable/queens)

Reviewed: https://review.openstack.org/570759
Committed: https://git.openstack.org/cgit/openstack/tripleo-common/commit/?id=49956b826064ae0bb336235c32bf49573bdf5ef0
Submitter: Zuul
Branch: stable/queens

commit 49956b826064ae0bb336235c32bf49573bdf5ef0
Author: Nir Magnezi <email address hidden>
Date: Mon May 28 12:01:24 2018 +0300

    Increase services project secgroup-rules quotas when deploying Octavia

    This patch is a followup to I6b87b147d301dea3251fe4509b04cd4b9b27ddba,
    which lacked the secgroup-rules quota.

    The reasoning is the same:
    Octavia currently launches load balancer VMs in the services tenant
    which has very low quotas by default. This patch increases the values
    through the API for the services project to permit lots of load
    balancers.

    Closes Bug: #1769896

    Change-Id: I8ad5175f7ca79939725a0c2342378a052b5c6bd2
    (cherry picked from commit c27024a2c8cd00311ddd580f394f7c85872326a8)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/tripleo-common 8.6.2

This issue was fixed in the openstack/tripleo-common 8.6.2 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/tripleo-common 9.1.0

This issue was fixed in the openstack/tripleo-common 9.1.0 release.
