amqp1.0 driver - memory increase in RPC client
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
oslo.messaging | Expired | Undecided | Unassigned |
Bug Description
Using ombt[1], I observed a memory increase on the client side during different benchmarks.
I reproduced this observation with a small setup:
- 1 RPC client sending 10^6 messages at full rate
- 5 RPC servers
- qpid-dispatch-
- oslo.messaging=
- pyngus==2.2.2
- python-
The memory increase appears in both the rpc-call and rpc-cast tests.
I have attached the rpc-cast case, showing the RSS memory of the RPC client in the above configuration.
Possible steps to reproduce locally using ombt-orchestrator[2]:
$) ./cli.py deploy --driver=router vagrant
$) ./cli.py test_case_1 --nbr_clients 1 --nbr_servers 5 --nbr_calls 1000000 --timeout 8000
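To confirm the growth shown in the attached RSS plot independently of the benchmark tooling, the client process's resident set size can be sampled while the test runs. The helper below is a minimal stdlib-only sketch (the function names are illustrative, not part of ombt); it reads the peak RSS via `resource.getrusage`, which reports `ru_maxrss` in KiB on Linux.

```python
import resource
import time


def peak_rss_kib():
    """Peak resident set size of the current process (KiB on Linux)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss


def watch(interval=1.0, samples=5):
    """Print the peak RSS periodically; run alongside the RPC client.

    A client with stable memory use should plateau quickly; the bug
    reported here shows the value climbing for the whole run instead.
    """
    for _ in range(samples):
        print(peak_rss_kib(), "KiB")
        time.sleep(interval)
```

Note that `ru_maxrss` is a high-water mark, so it can only confirm growth, not shrinkage; for the full RSS-over-time curve, sampling `VmRSS` from `/proc/<pid>/status` would be the Linux-specific alternative.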
[1]: https:/
[2]: https:/
Changed in oslo.messaging:
assignee: nobody → Ken Giusti (kgiusti)

Changed in oslo.messaging:
assignee: Ken Giusti (kgiusti) → nobody
status: New → Incomplete
Just a note that we have packaged the orchestrator, so the command-line interface has changed a bit.
For ombt-orchestrator 1.0.1:
$) oo deploy --driver=router vagrant
$) oo test_case_1 --nbr_clients 1 --nbr_servers 5 --nbr_calls 1000000 --timeout 8000