Topic Name used as Queue Name for QPID Consumers

Bug #1356226 reported by Ma Wen Cheng
Affects: oslo.messaging
Status: Fix Released
Importance: Undecided
Assigned to: Mehdi Abaakouk

Bug Description

Currently a couple of PowerVC topic consumers use QPID client calls directly to subscribe and listen for messages sent to a given topic. Given the planned switch to RabbitMQ, and the fact that we might be in limbo for a while, these consumers were being moved to oslo.messaging rather than direct QPID calls, to minimize churn when toggling back and forth.

When switching over, though, it seemed like only a subset of the messages were getting through. Debugging showed that when impl_qpid.py constructs a TopicConsumer, it uses the topic name as the queue name whenever no queue name is passed in (and, the way the oslo.messaging code is set up, the caller can never pass one in).

The problem is that every consumer of a given topic ends up using the same queue in qpid, and since there is no requeueing support, only the first consumer to read a message receives it; the others never see it.
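The failure mode can be illustrated with a small stdlib sketch (plain Python queues standing in for qpid queues; all names here are illustrative, not the oslo.messaging API): when every subscriber pulls from one shared queue, each message reaches exactly one of them.

```python
import queue

# One shared queue per topic: this is effectively what impl_qpid.py
# creates when the topic name is reused as the queue name.
shared = queue.Queue()
for i in range(4):
    shared.put(f"msg-{i}")

# Two consumers of the same topic compete for the same queue, so the
# four messages are split between them instead of duplicated.
consumer_a, consumer_b = [], []
while not shared.empty():
    consumer_a.append(shared.get())
    if not shared.empty():
        consumer_b.append(shared.get())

print(consumer_a)  # ['msg-0', 'msg-2']
print(consumer_b)  # ['msg-1', 'msg-3']
```

Each of the four messages is delivered exactly once, split across the two consumers, which matches the "only a subset of the messages" symptom reported above.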

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to oslo.messaging (master)

Fix proposed to branch: master
Review: https://review.openstack.org/113808

Changed in oslo.messaging:
assignee: nobody → Ma Wen Cheng (mars914)
status: New → In Progress
Revision history for this message
Ken Giusti (kgiusti) wrote :

Actually, the behavior you are seeing is by design. oslo.messaging's "TopicConsumer" object is used to provide two different message consumption patterns:

1) Arriving messages are load-balanced across multiple subscribers to the same topic.
2) The message is sent to a particular server for a given topic.

I think the confusion is with the #1 case above: the term "Topic" really doesn't apply to this pattern (in the traditional JMS Topic sense). This pattern is better described as a shared queue.

#2 is more 'topic-like' in that the server's name is appended to the subscribed address.

I think the behavior you want - all subscribers get a copy of the message - is implemented by the FanoutConsumer object.

In any case, take a look at the amqpdriver.listen() method. This is where a server sets up its subscriptions using the Topic and Fanout Consumer objects.
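The two delivery patterns contrasted above can be sketched with stdlib queues (illustrative names only, not the driver's actual objects): a "topic" subscription shares one queue across subscribers, while a fanout subscription gives each subscriber its own queue, so every subscriber sees every message.

```python
import queue

def publish(bound_queues, msg):
    """Deliver msg to every queue bound to the address."""
    for q in bound_queues:
        q.put(msg)

# Topic/shared-queue pattern: both subscribers read the SAME queue,
# so each message is load-balanced to exactly one of them.
shared = queue.Queue()
publish([shared], "task")
topic_sub1 = shared.get()          # first subscriber gets "task"
topic_sub2_empty = shared.empty()  # nothing left for the second one

# Fanout pattern: each subscriber binds its OWN queue, so every
# subscriber receives its own copy of the message.
q1, q2 = queue.Queue(), queue.Queue()
publish([q1, q2], "event")
fanout_sub1 = q1.get()  # "event"
fanout_sub2 = q2.get()  # "event" -- both subscribers get a copy
```

This is why the FanoutConsumer, not the TopicConsumer, is the right fit when every subscriber should receive every message.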

Changed in oslo.messaging:
assignee: Ma Wen Cheng (mars914) → Terry Yao (yaohaif)
Changed in oslo.messaging:
assignee: Terry Yao (yaohaif) → Mehdi Abaakouk (sileht)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on oslo.messaging (master)

Change abandoned by Ma Wen Cheng (<email address hidden>) on branch: master
Review: https://review.openstack.org/113808
Reason: abandoning this patch because https://review.openstack.org/#/c/125112/ is tracking the issue

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to oslo.messaging (master)

Reviewed: https://review.openstack.org/125112
Committed: https://git.openstack.org/cgit/openstack/oslo.messaging/commit/?id=30e0aea8775744733571f48b7bac3c8ea8b1b9d0
Submitter: Jenkins
Branch: master

commit 30e0aea8775744733571f48b7bac3c8ea8b1b9d0
Author: Mehdi Abaakouk <email address hidden>
Date: Tue Sep 30 13:45:40 2014 +0200

    Notification listener pools

    We can now set the pool name of a notification listener
    to create multiple groups/pools of listeners consuming notifications,
    so that each group/pool receives only one copy of each notification.

    The AMQP implementation of this is to set queue_name to the pool name.

    Implements blueprint notification-listener-pools
    Closes-bug: #1356226

    Change-Id: I8dc0549f5550f684a261c78c58737b798fcdd656
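The pool mechanism the commit describes can be sketched with stdlib dictionaries and queues (illustrative names, not the oslo.messaging API): each pool name maps to one shared queue, listeners in the same pool compete for messages, and each distinct pool receives its own copy of every notification.

```python
import queue
from collections import defaultdict

# One queue per pool name -- the AMQP implementation sets queue_name
# to the pool name, so listeners sharing a pool share a queue.
pools = defaultdict(queue.Queue)

def join_pool(pool_name):
    """A listener joining a pool binds to that pool's shared queue."""
    return pools[pool_name]

def notify(msg):
    """Every pool gets one copy of each notification."""
    for q in pools.values():
        q.put(msg)

billing_1 = join_pool("billing")  # two listeners in the same pool
billing_2 = join_pool("billing")
audit = join_pool("audit")        # a second, independent pool

notify("instance.create.end")

# Within a pool, only one listener consumes the pool's copy...
msg_billing = billing_1.get()
billing_empty = billing_2.empty()  # True: already consumed by the pool
# ...but the other pool still has its own copy.
msg_audit = audit.get()

print(msg_billing, billing_empty, msg_audit)
```

Setting no pool name falls back to the original shared-queue behavior, while giving each logical consumer group its own pool name delivers one copy per group, which is what the original reporters needed.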

Changed in oslo.messaging:
status: In Progress → Fix Committed
Mehdi Abaakouk (sileht)
Changed in oslo.messaging:
milestone: none → 1.5.0
status: Fix Committed → Fix Released