[messaging] qpid driver leaks queues and exchanges
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| oslo-incubator | Incomplete | Undecided | Unassigned | |
Bug Description
When using the QPID transport, it seems that RPC client activity creates queues and exchanges on the broker for receiving replies from the RPC server. However, these temporary queues/exchanges are not deleted once the client exits. Over time, this will unnecessarily impact the broker's resources.
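A likely fix is for the client to declare its per-call reply queue with broker-side auto-deletion, so nothing survives the client's exit. Below is a minimal sketch in qpid.messaging address syntax; the helper name and the "reply_" prefix are assumptions for illustration, not the driver's actual code:

```python
# Sketch (hypothetical helper): build a qpid.messaging address for a
# transient reply queue that the broker removes on its own.
import uuid

def reply_address(prefix="reply_"):
    """Return an address string for a per-call reply queue.

    "delete: receiver" tears the node down when the receiving link closes,
    and "auto-delete: True" tells the broker to drop the queue once its
    last consumer detaches -- either way, nothing is left behind.
    """
    name = prefix + uuid.uuid4().hex
    options = ("{create: receiver, delete: receiver, "
               "node: {x-declare: {auto-delete: True, exclusive: True}}}")
    return "%s ; %s" % (name, options)
```

A receiver opened with `session.receiver(reply_address())` would then claim and clean up its own queue, assuming the broker honours the x-declare arguments.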
Here's an example. I've created a test RPC server that listens for requests on "my-topic". Using the qpid-stat tool, I'll dump the currently active exchanges and queues on the broker:
```
(pyenv)
Exchanges
  exchange           type     dur  bind  msgIn  msgOut  msgDrop  byteIn  byteOut  byteDrop
  =======================================================================================
  amq.direct         direct   Y    0     0      0       0        0       0        0
  amq.fanout         fanout   Y    0     0      0       0        0       0        0
  amq.match          headers  Y    0     0      0       0        0       0        0
  amq.topic          topic    Y    0     0      0       0        0       0        0
  my-topic_fanout    fanout        1     0      0       0        0       0        0
  openstack          topic    Y    2     0      0       0        0       0        0
  qmf.default.
  qmf.default.topic  topic         1     40     1       39       40.0k   6.29k    33.7k
  qpid.management    topic         0     0      0       0        0       0        0

(pyenv)
Queues
  queue             dur  autoDel  excl  msg  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
  =============================================================================================
  039455f9-
  my-topic                              0    0      0       0      0        0         1     2
  my-topic.SERVER1                      0    0      0       0      0        0         1     2
  my-topic_
```
The server has created a couple of exchanges and queues, as expected. Now if I run a simple client that performs one RPC call against that server:
```
(pyenv)
Calling server on topic my-topic, server=None exchange=
Method=methodA, args={'arg1': 'arg2'}
Return value=None

(pyenv)
Queues
  queue             dur  autoDel  excl  msg  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
  =============================================================================================
  1134624d-
  my-topic                              0    1      1       0      393      393       1     2
  my-topic.SERVER1                      0    0      0       0      0        0         1     2
  my-topic_
  reply_

(pyenv)
Exchanges
  exchange           type     dur  bind  msgIn  msgOut  msgDrop  byteIn  byteOut  byteDrop
  =======================================================================================
  amq.direct         direct   Y    0     0      0       0        0       0        0
  amq.fanout         fanout   Y    0     0      0       0        0       0        0
  amq.match          headers  Y    0     0      0       0        0       0        0
  amq.topic          topic    Y    0     0      0       0        0       0        0
  my-topic_fanout    fanout        1     0      0       0        0       0        0
  openstack          topic    Y    2     1      1       0        393     393      0
  qmf.default.
  qmf.default.topic  topic         1     98     3       95       101k    21.6k    80.0k
  qpid.management    topic         0     0      0       0        0       0        0
  reply_
```
The qpid-stat output shows that the client created a new exchange and queue, and that both remain after the client has exited. I suspect this is unintended: the names appear to be random UUIDs, so they will probably never be used again.
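Since the leaked names follow a recognisable pattern (a "reply_" prefix or a bare UUID), an operator could at least enumerate them. Here is a small sketch of that heuristic; the pattern is inferred from the listings above, not defined by qpid itself:

```python
# Sketch: spotting likely leaked reply queues/exchanges in a
# qpid-stat-style name listing. The match rule is a heuristic
# based on the dumps in this report.
import re

# Names such as "1134624d-..." start with 8 hex digits and a dash.
UUIDISH = re.compile(r"^[0-9a-f]{8}-")

def leaked(names):
    """Return the subset of names that look like leaked reply nodes."""
    return [n for n in names
            if n.startswith("reply_") or UUIDISH.match(n)]

queues = ["my-topic", "my-topic.SERVER1", "1134624d-aaaa", "reply_abc123"]
print(leaked(queues))  # -> ['1134624d-aaaa', 'reply_abc123']
```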
Rerunning the client a couple more times:
```
(pyenv)
Queues
  queue             dur  autoDel  excl  msg  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
  =============================================================================================
  0af107d8-
  my-topic                              0    4      4       0      1.57k    1.57k     1     2
  my-topic.SERVER1                      0    0      0       0      0        0         1     2
  my-topic_
  reply_
  reply_
  reply_
  reply_

(pyenv)
Exchanges
  exchange           type     dur  bind  msgIn  msgOut  msgDrop  byteIn  byteOut  byteDrop
  =======================================================================================
  amq.direct         direct   Y    0     0      0       0        0       0        0
  amq.fanout         fanout   Y    0     0      0       0        0       0        0
  amq.match          headers  Y    0     0      0       0        0       0        0
  amq.topic          topic    Y    0     0      0       0        0       0        0
  my-topic_fanout    fanout        1     0      0       0        0       0        0
  openstack          topic    Y    2     4      4       0        1.57k   1.57k    0
  qmf.default.
  qmf.default.topic  topic         1     144    5       139      166k    42.3k    123k
  qpid.management    topic         0     0      0       0        0       0        0
  reply_
  reply_
  reply_
  reply_
```
Given that these queues and exchanges are not declared auto-delete, they will have to be deleted manually.
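For manual cleanup, qpid-config's `del queue` and `del exchange` subcommands can remove the leftovers one by one. A sketch that emits those commands for a list of leaked names follows; the name lists are illustrative, and `--force` (my recollection of qpid-config's override for queues it would otherwise refuse to delete) is an assumption worth verifying against `qpid-config --help`:

```python
# Sketch: generate manual cleanup commands for leaked reply
# queues and exchanges, to be reviewed before running.
def cleanup_commands(queues, exchanges):
    """Return qpid-config invocations that would delete the given nodes."""
    cmds = ["qpid-config del queue %s --force" % q for q in queues]
    cmds += ["qpid-config del exchange %s" % e for e in exchanges]
    return cmds

for cmd in cleanup_commands(["reply_a"], ["reply_b"]):
    print(cmd)
```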
Will this issue affect qpid performance?
In our production environment, more than 50,000 queues and exchanges had been generated after about 6 hours. After about 10 hours qpid-config stopped responding, and I had to restart qpidd to recover.
```
fffe613d84ff44709941379c5a526c07  direct   Y   0      0      0     0  0  0  0
98293b91435b392f63b               direct   Y   0      0      0     0  0  0  0
ad2a399e3368d2567a7               direct   Y   0      0      0     0  0  0  0
direct                            direct       0      1      1     0  0  0  0
fffe6c89785b4
fffffeb96d664
heat                              topic    Y   2      28     28    0  0  0  0
network_fanout                    fanout       10     0      0     0  0  0  0
nova                              topic    Y   30     53.8k  53.8k 7  0  0  0
openstack                         topic    Y   5      24     23    1  0  0  0
qmf.default.
qmf.default.topic                 topic        1      360k   4.30k 355k  0  0  0
qpid.management                   topic        0      358k   53    358k  0  0  0
scheduler_fanout                  fanout       1      3.94k  3.94k 0  0  0  0

[root@os-controller-srv ~]# qpid-stat -e | wc -l
53778
[root@os-controller-srv ~]# qpid-config
Total Exchanges: 54184
  topic: 6
  headers: 1
  fanout: 10
  direct: 54167
Total Queues: 66
  durable: 0
  non-durable: 66
```