WARNING: 23 Queue(s) with insufficient members
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Snap | Triaged | Critical | Unassigned |
Bug Description
By following the multi-node scenario:
https:/
$ juju status -m openstack rabbitmq
Model Controller Cloud/Region Version SLA Timestamp
openstack sunbeam-controller sunbeam-
SAAS Status Store URL
microceph active local admin/controlle
App Version Status Scale Charm Channel Rev Address Exposed Message
rabbitmq 3.12.1 active 3 rabbitmq-k8s 3.12/stable 33 10.0.123.84 no WARNING: 23 Queue(s) with insufficient members
Unit Workload Agent Address Ports Message
rabbitmq/0* active idle 10.1.32.227 WARNING: 23 Queue(s) with insufficient members
rabbitmq/1 active idle 10.1.193.216
rabbitmq/2 active idle 10.1.186.27
The rabbitmq cluster status looks okay.
root@rabbitmq-0:/# rabbitmqctl cluster_status
Cluster status of node <email address hidden> ...
Basics
Cluster name: <email address hidden>
Total CPU cores available cluster-wide: 48
Disk Nodes
<email address hidden>
<email address hidden>
<email address hidden>
Running Nodes
<email address hidden>
<email address hidden>
<email address hidden>
Versions
<email address hidden>: RabbitMQ 3.12.1 on Erlang 25.2.3
<email address hidden>: RabbitMQ 3.12.1 on Erlang 25.2.3
<email address hidden>: RabbitMQ 3.12.1 on Erlang 25.2.3
CPU Cores
Node: <email address hidden>, available CPU cores: 16
Node: <email address hidden>, available CPU cores: 16
Node: <email address hidden>, available CPU cores: 16
Maintenance status
Node: <email address hidden>, status: not under maintenance
Node: <email address hidden>, status: not under maintenance
Node: <email address hidden>, status: not under maintenance
Alarms
(none)
Network Partitions
(none)
...
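To see which queues are undersized despite the healthy-looking cluster, the per-queue membership can be listed with `rabbitmqctl list_queues name members`. A small sketch of parsing that tab-separated output and flagging queues with fewer members than cluster nodes (the sample output is illustrative, not taken from the affected cluster):

```python
# Parse tab-separated `rabbitmqctl list_queues name members --quiet` output
# and flag queues whose quorum membership is below the cluster size.
# SAMPLE is illustrative data, not output from the affected deployment.

SAMPLE = """\
queue-a\t[rabbit@rabbitmq-0, rabbit@rabbitmq-1, rabbit@rabbitmq-2]
queue-b\t[rabbit@rabbitmq-0]
"""


def undersized(output, cluster_size):
    """Return (name, member_count) for each queue below cluster_size."""
    flagged = []
    for line in output.splitlines():
        name, members = line.split("\t")
        count = len(members.strip("[]").split(", "))
        if count < cluster_size:
            flagged.append((name, count))
    return flagged


print(undersized(SAMPLE, cluster_size=3))
# → [('queue-b', 1)]
```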
Changed in snap-openstack:
status: New → Triaged
importance: Undecided → Critical
tags: added: open-2198
https://github.com/openstack-charmers/charm-rabbitmq-k8s/blob/54d248a5741b0c19381e7058fba43c981f1f0952/src/charm.py#L891-L895
https://github.com/openstack-charmers/charm-rabbitmq-k8s/blob/54d248a5741b0c19381e7058fba43c981f1f0952/src/charm.py#L856-L864
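The linked check amounts to comparing each quorum queue's member count against the expected number of cluster nodes and surfacing a warning when any fall short. A minimal sketch of that kind of check (function and data shapes are hypothetical, not the charm's actual API):

```python
# Hypothetical sketch of an "insufficient members" check; the real
# implementation lives in charm-rabbitmq-k8s src/charm.py (links above).


def queues_with_insufficient_members(queues, expected_members):
    """Return names of quorum queues whose membership is below expected_members.

    `queues` is a list of dicts with "name" and "members" keys, as one might
    parse from the RabbitMQ management API.
    """
    return [q["name"] for q in queues if len(q["members"]) < expected_members]


queues = [
    {"name": "cinder-ceph-queue", "members": ["rabbit@rabbitmq-0"]},
    {"name": "healthy-queue",
     "members": ["rabbit@rabbitmq-0", "rabbit@rabbitmq-1", "rabbit@rabbitmq-2"]},
]
print(queues_with_insufficient_members(queues, expected_members=3))
# → ['cinder-ceph-queue']
```

With three cluster nodes, any queue reporting fewer than three members would be counted toward the "23 Queue(s) with insufficient members" warning.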
Indeed, those queues are undersized after the Sunbeam resize operation, e.g.:
- arguments:
  capacity: 0
  utilisation: 0
  policy_definition: {}
  collection:
  after: 65535
  bin_vheap_size: 46422
  bytes_dlx: 0
  bytes_persistent: 0
  bytes_ram: 0
  bytes_ready: 0
  bytes_unacknowledged: 0
  persistent: 0
  ready_details:
  unacknowledged: 0
  unacknowledged_details:
  volume.cinder-ceph-0@cinder-ceph
  details:
  active_consumer_tag: None
x-queue-type: quorum
auto_delete: false
consumer_
consumer_
consumers: 0
durable: true
effective_
exclusive: false
garbage_
fullsweep_
max_heap_size: 0
min_
min_heap_size: 233
minor_gcs: 5
leader: <email address hidden>
members:
- <email address hidden> ########## <- only one member
memory: 143172
message_bytes: 0
message_
message_
message_
message_
message_
messages: 0
messages_details:
rate: 0.0
messages_dlx: 0
messages_
messages_ram: 0
messages_ready: 0
messages_
rate: 0.0
messages_
messages_
rate: 0.0
name: cinder-
node: <email address hidden>
online:
- <email address hidden>
open_files:
<email address hidden>: 0
operator_policy: None
policy: None
reductions: 33414
reductions_
rate: 401.0
single_
state: running
type: quorum
vhost: openstack
Not sure why grow_queues_onto_unit is not properly applied in this case.
https://github.com/openstack-charmers/charm-rabbitmq-k8s/blob/54d248a5741b0c19381e7058fba43c981f1f0952/src/charm.py#L402-L425
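As a manual workaround (separate from whatever the charm's grow_queues_onto_unit should have done), RabbitMQ ships `rabbitmq-queues grow`, which adds a member on a given node for every matching quorum queue. A small sketch that builds the commands for each node missing from an undersized queue's membership (node names are illustrative):

```python
# Build `rabbitmq-queues grow <node> all` invocations for each cluster node
# absent from a quorum queue's membership. Node names are illustrative.


def grow_commands(cluster_nodes, queue_members):
    """Return the rabbitmq-queues commands needed to add the missing members."""
    missing = [n for n in cluster_nodes if n not in queue_members]
    return [f"rabbitmq-queues grow {node} all" for node in missing]


cluster = ["rabbit@rabbitmq-0", "rabbit@rabbitmq-1", "rabbit@rabbitmq-2"]
members = ["rabbit@rabbitmq-0"]  # like the single-member queue above
for cmd in grow_commands(cluster, members):
    print(cmd)
# → rabbitmq-queues grow rabbit@rabbitmq-1 all
# → rabbitmq-queues grow rabbit@rabbitmq-2 all
```

The `all` selector grows every quorum queue on the target node; `even` (grow only queues with an even member count) is the other selector RabbitMQ documents.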