Hi,
we tested OpenStack clouds under load with different numbers of API workers and different numbers of OpenStack controller nodes, and we found that the following numbers work well:
Run 10-30 API workers on each controller. The exact number depends on the controller node's configuration, but it should not exceed 30: beyond that point performance degrades, because workers start blocking each other on concurrent operations against the database, RabbitMQ, and other shared resources.
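As a concrete example, for the Nova API the worker count is set with the `osapi_compute_workers` option (other services have their own analogous worker options). A minimal sketch of the relevant nova.conf fragment, with the value itself being an assumption you should tune per node:

```ini
# /etc/nova/nova.conf (fragment)
[DEFAULT]
# Number of nova-api worker processes on this controller.
# Keep this in the 10-30 range per the load-test results above;
# by default it typically matches the node's CPU core count.
osapi_compute_workers = 16
```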
To scale the cloud, change the number of OpenStack controller nodes. For example, if a cloud with 3 controllers can handle 300 users in parallel, adding 2 more controllers (for a total of 5) lets the cloud support more concurrent users.
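The scaling argument above can be sketched as a back-of-the-envelope estimate. This assumes capacity grows roughly linearly with controller count, and the 100-users-per-controller figure is derived only from the 3-controller / 300-user observation above, not measured independently:

```python
def estimated_capacity(controllers, users_per_controller=100):
    """Rough estimate of parallel-user capacity.

    users_per_controller=100 is an assumption taken from the
    3-controller / 300-user data point in the text; real capacity
    depends on node hardware and worker configuration.
    """
    return controllers * users_per_controller


# 3 controllers handle ~300 parallel users; adding 2 more
# controllers raises the estimate to ~500.
print(estimated_capacity(3))  # 300
print(estimated_capacity(5))  # 500
```

In practice, measure capacity again after adding controllers rather than relying on the linear model alone, since shared resources like the database and RabbitMQ can become the new bottleneck.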