2016-04-14 16:33:34 |
Kirill Bespalov |
bug |
|
|
added bug |
2016-04-14 16:33:55 |
Kirill Bespalov |
ceilometer: assignee |
|
Kirill Bespalov (k-besplv) |
|
2016-04-14 16:43:26 |
Kirill Bespalov |
description |
The notification agent in Ceilometer creates its own NotificationListener for each target, even though a listener by design can listen on many targets simultaneously. This leads to performance overhead for the agent and overloads the AMQP broker.
You can explore the current implementation here:
https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L246
For example, by default Ceilometer uses 60 targets: 6 sources (cpu_source, meter_source, etc.) x 10 processing queues used for coordination (the IPC queue mechanism):
ceilometer-pipe-cpu_source:cpu_delta_sink-0.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-1.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-2.sample
...
ceilometer-pipe-cpu_source:cpu_delta_sink-9.sample
...
So after initialization, 2 notification-agent worker processes are started; each agent then creates 60 NotificationListeners, and every listener gets a dedicated TCP connection to the AMQP broker and its own thread pool executor (default size 64).
As a result, every controller node running the notification agent has around 120 TCP connections (2 workers x 60 listeners) to the AMQP broker and 7680 running green threads (120 x 64). It's a problem. |
The notification agent in Ceilometer creates its own NotificationListener for each target, even though a listener by design can listen on many targets simultaneously.
This leads to performance overhead for the agent and overloads the AMQP broker.
You can explore current implementation here:
https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L246
For example, by default Ceilometer uses 60 targets:
6 sources (cpu_source, meter_source, etc.) x 10 processing queues used for coordination (the IPC queue mechanism):
ceilometer-pipe-cpu_source:cpu_delta_sink-0.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-1.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-2.sample
...
ceilometer-pipe-cpu_source:cpu_delta_sink-9.sample
...
So after initialization, 2 notification-agent worker processes are started; each agent then creates 60 NotificationListeners, and every listener gets a dedicated TCP connection to the AMQP broker and its own thread pool executor (default size 64).
As a result, every controller node running the notification agent has around 120 TCP connections (2 workers x 60 listeners) to the AMQP broker and 7680 running green threads (120 x 64). It's a problem. |
|
2016-04-14 16:44:21 |
Kirill Bespalov |
description |
The notification agent in Ceilometer creates its own NotificationListener for each target, even though a listener by design can listen on many targets simultaneously.
This leads to performance overhead for the agent and overloads the AMQP broker.
You can explore current implementation here:
https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L246
For example, by default Ceilometer uses 60 targets:
6 sources (cpu_source, meter_source, etc.) x 10 processing queues used for coordination (the IPC queue mechanism):
ceilometer-pipe-cpu_source:cpu_delta_sink-0.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-1.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-2.sample
...
ceilometer-pipe-cpu_source:cpu_delta_sink-9.sample
...
So after initialization, 2 notification-agent worker processes are started; each agent then creates 60 NotificationListeners, and every listener gets a dedicated TCP connection to the AMQP broker and its own thread pool executor (default size 64).
As a result, every controller node running the notification agent has around 120 TCP connections (2 workers x 60 listeners) to the AMQP broker and 7680 running green threads (120 x 64). It's a problem. |
The notification agent in Ceilometer creates its own NotificationListener for each target, even though a listener by design can listen on many targets simultaneously.
This leads to performance overhead for the agent and overloads the AMQP broker.
You can explore current implementation here:
https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L246
For example, by default Ceilometer uses 60 targets:
6 sources (cpu_source, meter_source, etc.) x 10 processing queues used for coordination (the IPC queue mechanism):
ceilometer-pipe-cpu_source:cpu_delta_sink-0.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-1.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-2.sample
...
ceilometer-pipe-cpu_source:cpu_delta_sink-9.sample
...
So after initialization, 2 notification-agent worker processes are started; each agent then creates 60 NotificationListeners, and every listener gets a dedicated TCP connection to the AMQP broker and its own thread pool executor (default size 64).
As a result, every controller node running the notification agent has around 120 TCP connections (2 workers x 60 listeners) to the AMQP broker and 7680 running green threads (120 x 64). It's a problem. |
|
2016-04-14 16:51:04 |
Chris Dent |
bug |
|
|
added subscriber Chris Dent |
2016-04-14 18:08:10 |
Kirill Bespalov |
description |
The notification agent in Ceilometer creates its own NotificationListener for each target, even though a listener by design can listen on many targets simultaneously.
This leads to performance overhead for the agent and overloads the AMQP broker.
You can explore current implementation here:
https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L246
For example, by default Ceilometer uses 60 targets:
6 sources (cpu_source, meter_source, etc.) x 10 processing queues used for coordination (the IPC queue mechanism):
ceilometer-pipe-cpu_source:cpu_delta_sink-0.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-1.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-2.sample
...
ceilometer-pipe-cpu_source:cpu_delta_sink-9.sample
...
So after initialization, 2 notification-agent worker processes are started; each agent then creates 60 NotificationListeners, and every listener gets a dedicated TCP connection to the AMQP broker and its own thread pool executor (default size 64).
As a result, every controller node running the notification agent has around 120 TCP connections (2 workers x 60 listeners) to the AMQP broker and 7680 running green threads (120 x 64). It's a problem. |
The notification agent in Ceilometer creates its own NotificationListener for each target, even though a listener by design can listen on many targets simultaneously.
This leads to performance overhead for the agent and overloads the AMQP broker.
You can explore current implementation here:
https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L246
For example, by default Ceilometer uses 60 targets:
6 sources (cpu_source, meter_source, etc.) x 10 processing queues used for coordination (the IPC queue mechanism):
ceilometer-pipe-cpu_source:cpu_delta_sink-0.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-1.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-2.sample
...
ceilometer-pipe-cpu_source:cpu_delta_sink-9.sample
...
So after initialization, 2 notification-agent worker processes are started; each agent then creates 60 NotificationListeners, and every listener gets a dedicated TCP connection to the AMQP broker and its own thread pool executor (default size 64).
As a result, every controller node running the notification agent has around 120 TCP connections (2 workers x 60 listeners) to the AMQP broker and 7680 running green threads (120 x 64). |
|
2016-04-14 18:10:43 |
Kirill Bespalov |
description |
The notification agent in Ceilometer creates its own NotificationListener for each target, even though a listener by design can listen on many targets simultaneously.
This leads to performance overhead for the agent and overloads the AMQP broker.
You can explore current implementation here:
https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L246
For example, by default Ceilometer uses 60 targets:
6 sources (cpu_source, meter_source, etc.) x 10 processing queues used for coordination (the IPC queue mechanism):
ceilometer-pipe-cpu_source:cpu_delta_sink-0.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-1.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-2.sample
...
ceilometer-pipe-cpu_source:cpu_delta_sink-9.sample
...
So after initialization, 2 notification-agent worker processes are started; each agent then creates 60 NotificationListeners, and every listener gets a dedicated TCP connection to the AMQP broker and its own thread pool executor (default size 64).
As a result, every controller node running the notification agent has around 120 TCP connections (2 workers x 60 listeners) to the AMQP broker and 7680 running green threads (120 x 64). |
The notification agent in Ceilometer creates its own NotificationListener for each target, even though a listener by design can listen on many targets simultaneously.
This leads to performance overhead for the agent and overloads the AMQP broker.
You can explore current implementation here:
https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L246
For example, by default Ceilometer uses 60 targets:
6 sources (cpu_source, meter_source, etc.) x 10 processing queues used for coordination (the IPC queue mechanism):
ceilometer-pipe-cpu_source:cpu_delta_sink-0.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-1.sample
ceilometer-pipe-cpu_source:cpu_delta_sink-2.sample
...
ceilometer-pipe-cpu_source:cpu_delta_sink-9.sample
...
So after initialization, 2 notification-agent worker processes are started; each worker then creates 60 NotificationListeners, and every listener gets a dedicated TCP connection to the AMQP broker and its own thread pool executor (default size 64).
As a result, every controller node running the notification agent has around 120 TCP connections (2 workers x 60 listeners) to the AMQP broker and 7680 running green threads (120 x 64). |
|
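[Editor's note] The shape of the problem, and of the direction the description implies for a fix, can be sketched with the public oslo.messaging API: get_notification_listener() accepts a list of targets, so the per-target listeners could be consolidated. This is a minimal illustration, not Ceilometer's actual code; the transport URL, topic names, and endpoint class are illustrative assumptions.

    # Minimal sketch (assumed names, not Ceilometer's code): contrasts one
    # listener per target with a single listener subscribed to all targets.
    import oslo_messaging
    from oslo_config import cfg

    # Transport URL is an illustrative assumption.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')

    # The 10 IPC pipeline queues for one source, as in the listing above.
    topics = ['ceilometer-pipe-cpu_source:cpu_delta_sink-%d' % i
              for i in range(10)]
    targets = [oslo_messaging.Target(topic=topic) for topic in topics]

    class SampleEndpoint(object):
        # Invoked for notifications sent with the 'sample' priority
        # (the queue names above end in '.sample').
        def sample(self, ctxt, publisher_id, event_type, payload, metadata):
            pass  # process the sample here

    # Current pattern: one listener per target. Each listener opens its own
    # TCP connection to the broker and its own executor pool (64 green
    # threads by default).
    listeners = [
        oslo_messaging.get_notification_listener(
            transport, [target], [SampleEndpoint()], executor='eventlet')
        for target in targets
    ]

    # Consolidated pattern: one listener for all targets, so each worker
    # holds a single connection and a single executor pool.
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [SampleEndpoint()], executor='eventlet')
    listener.start()

With one multi-target listener per worker, the per-node footprint should drop from around 120 broker connections to 2, with the green-thread count shrinking accordingly.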
2016-04-14 19:03:35 |
OpenStack Infra |
ceilometer: status |
New |
In Progress |
|
2016-04-14 20:52:03 |
gordon chung |
ceilometer: importance |
Undecided |
High |
|
2016-04-14 20:52:19 |
gordon chung |
nominated for series |
|
ceilometer/liberty |
|
2016-04-14 20:52:19 |
gordon chung |
bug task added |
|
ceilometer/liberty |
|
2016-04-14 20:52:19 |
gordon chung |
nominated for series |
|
ceilometer/mitaka |
|
2016-04-14 20:52:19 |
gordon chung |
bug task added |
|
ceilometer/mitaka |
|
2016-04-15 09:12:09 |
Kirill Bespalov |
ceilometer/liberty: assignee |
|
Kirill Bespalov (k-besplv) |
|
2016-04-28 22:20:52 |
OpenStack Infra |
ceilometer: status |
In Progress |
Fix Released |
|
2016-05-09 15:33:52 |
OpenStack Infra |
tags |
|
in-stable-mitaka |
|