Folks, I tested how exactly blocked publish works when a rabbit node exceeds its high memory watermark: pastebin.com/49JisHRP. If you want to give it a try, make sure you provide correct IP addresses and nova creds. The sample generator itself: https://github.com/bogdando/ceilometer/raw/rmq_bench/tools/sample-generator.py, pastebin.com/EgYtzYuY
Here are the scripts: http://
They require pika and python-ceilometer to be installed. The output of the test is at http://
As you can see from the test results, consume is never blocked when a RabbitMQ memory alarm is raised; only publish is blocked. So the issue with swap growing uncontrollably in the beam.smp process is likely caused by OpenStack apps continuing to push to the Rabbit cluster even after it has blocked publishing. The memory pressure may come, for example, from new connections being opened.
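The observed behaviour can be sketched as a toy model (a simplification for illustration, not actual RabbitMQ internals; all names below are made up): once memory use reaches the high watermark (0.4 of total RAM by default), the broker blocks publishing connections but keeps delivering to consumers, so only consumers can relieve the pressure.

```python
# Toy model of RabbitMQ's memory-alarm behaviour: publishers are
# blocked above the watermark, consumers keep draining. Illustrative
# only -- class and method names are hypothetical.

DEFAULT_WATERMARK = 0.4  # default vm_memory_high_watermark fraction


class Broker:
    def __init__(self, total_ram, watermark=DEFAULT_WATERMARK):
        self.limit = total_ram * watermark
        self.memory_used = 0

    @property
    def alarm(self):
        """Memory alarm is raised at or above the watermark."""
        return self.memory_used >= self.limit

    def publish(self, size):
        """Publish is refused while the alarm is raised."""
        if self.alarm:
            return False  # analogous to connection.blocked
        self.memory_used += size
        return True

    def consume(self):
        """Consume always succeeds and frees memory."""
        if self.memory_used > 0:
            self.memory_used -= 1
        return True
```

In this model, as in the test above, a publisher hammering a broker that is already at its watermark gets nowhere, while a consumer still runs and is the only way memory comes back down.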
The recent patch should give us memory growth dynamics for queues, connections, etc.