a lot of messages in scheduler rabbitmq queue during create-and-delete-volume rally scenario
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Mirantis OpenStack | Fix Released | High | Alex Schultz | |
| 7.0.x | Fix Released | High | Denis Meltsaykin | |
| 8.0.x | Fix Released | High | Alex Schultz | |
Bug Description
Steps to reproduce:
1. Install MOS environment with 3 controllers
2. Execute
service nova-scheduler restart
service cinder-scheduler restart
on one of the controllers.
3. Execute 'pcs resource' on the same controller and in its output find a slave instance of RabbitMQ.
4. Log into the controller where that slave instance runs and kill RabbitMQ there with the 'kill' command.
5. Once Pacemaker restores killed RabbitMQ, check output of
rabbitmqctl list_queues messages consumers name | grep scheduler_fanout
You will find that there is one 'cinder-
You may find the state of the queues in this paste: http://
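A quick way to spot the symptom described in step 5 is to look for fanout queues that accumulate messages while having no consumers attached. A minimal sketch, run against captured sample output instead of a live broker (the queue names and counts below are illustrative, not taken from this environment):

```shell
# Sample output in the format produced by:
#   rabbitmqctl list_queues messages consumers name
# (columns: messages, consumers, queue name). Illustrative data only.
sample_output='120 0 scheduler_fanout_abc123
0 1 scheduler_fanout_def456
450 0 cinder-scheduler_fanout_789abc'

# Flag fanout queues that hold messages but have zero consumers:
# these are the orphaned queues left behind after the restart.
printf '%s\n' "$sample_output" \
  | awk '$1 > 0 && $2 == 0 { print "orphaned:", $3 }'
```

On a real controller, the same awk filter can be fed directly from `rabbitmqctl`.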
===== RCA
The issue is caused by CR https:/
===== Initial description from Scale team
During create-
root@node-100:~# rabbitmqctl list_queues | grep -v 0$
Listing queues ...
cinder-
cinder-
cinder-
reply_ba0c220da
scheduler_
scheduler_
scheduler_
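The `grep -v 0$` filter in the listing above works because the default `rabbitmqctl list_queues` output is `name messages`, so dropping lines that end in 0 hides the empty queues. A sketch on illustrative data (note that the unanchored pattern would also hide non-empty queues whose count merely ends in 0, so anchoring on a preceding space is safer):

```shell
# Illustrative default-format listing: "name messages" per line.
sample='cinder-scheduler_fanout_aa 4521
reply_ba0c220da 0
scheduler_fanout_bb 380
scheduler_fanout_cc 0'

# "grep -v 0$" would also drop the 380-message queue (10, 380, ...
# all end in 0); matching " 0$" keeps only truly empty queues out.
printf '%s\n' "$sample" | grep -v ' 0$'
```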
Cluster configuration:
Baremetal,
Controllers:3 Computes:178 Computes+Ceph:20 LMA:2
api: '1.0'
astute_sha: 6c5b73f93e24cc7
auth_required: true
build_id: '298'
build_number: '298'
feature_groups:
- mirantis
fuel-agent_sha: 082a47bf014002e
fuel-library_sha: 0623b4daad438ce
fuel-nailgun-
fuel-ostf_sha: 1f08e6e71021179
fuelmain_sha: 6b83d6a6a75bf7b
nailgun_sha: d590b26dbb09785
openstack_version: 2015.1.0-7.0
production: docker
python-
release: '7.0'
Diagnostic Snapshot: http://
Milestone: 8.0
Tags: cinder, customer-found, nova, oslo.messaging, support, on-verification
The scheduler_fanout queues are more numerous, and I believe they belong to Nova, so the Nova team should look at this as well.
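One cautious way to clean up such orphaned fanout queues is to generate the purge commands from the queue listing rather than run them blindly, so an operator can review them first. A sketch on illustrative data; `rabbitmqctl purge_queue` is an assumption about the installed RabbitMQ version and should be checked before use:

```shell
# Illustrative listing in "messages consumers name" order; on a real
# controller this would come from:
#   rabbitmqctl list_queues messages consumers name
listing='7399 0 scheduler_fanout_abc123
0 1 scheduler_fanout_def456'

# Emit (do not execute) purge commands for consumer-less queues with
# a backlog. Piping the result to "sh" is left as a deliberate,
# reviewed step for the operator.
printf '%s\n' "$listing" \
  | awk '$1 > 0 && $2 == 0 { print "rabbitmqctl purge_queue " $3 }'
```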