Reproduced during the nova/boot_and_delete_server_with_secgroups Rally scenario; the test failed on iteration 914: http://paste.openstack.org/show/324559/
Before the tests we run "pcs resource unmanage" for the RabbitMQ Pacemaker resource.
rabbitmqctl list_queues hangs.
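Because list_queues hangs outright on the broken cluster, it helps to bound every follow-up diagnostic call so the session stays usable. A minimal sketch (run_limited is a hypothetical wrapper, not part of rabbitmqctl; the timeout budget is an arbitrary choice):

```shell
# Run a command with a time limit via coreutils timeout(1); report if it
# was killed or failed instead of blocking the shell indefinitely.
run_limited() {
  secs="$1"; shift
  timeout "$secs" "$@" || echo "command timed out or failed: $*"
}

# Usage on a controller node (assumes rabbitmqctl is on PATH):
# run_limited 30 rabbitmqctl list_queues
```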
node-288:
from log:
=INFO REPORT==== 29-Jun-2015::00:56:57 ===
rabbit on node 'rabbit@node-404' down
[root@node-288 ~]# rabbitmqctl cluster_status
Cluster status of node 'rabbit@node-288' ...
[{nodes,[{disc,['rabbit@node-288','rabbit@node-403','rabbit@node-404']}]},
{running_nodes,['rabbit@node-403','rabbit@node-288']},
{cluster_name,<<"<email address hidden>">>},
{partitions,[]}]
...done.
node-403:
from log: http://paste.openstack.org/show/324531/
[root@node-403 ~]# rabbitmqctl cluster_status
Cluster status of node 'rabbit@node-403' ...
[{nodes,[{disc,['rabbit@node-288','rabbit@node-403','rabbit@node-404']}]},
{running_nodes,['rabbit@node-404','rabbit@node-288','rabbit@node-403']},
{cluster_name,<<"<email address hidden>">>},
{partitions,[]}]
...done.
node-404:
from log:
=INFO REPORT==== 29-Jun-2015::00:56:57 ===
rabbit on node 'rabbit@node-288' down
[root@node-404 ~]# rabbitmqctl cluster_status
Cluster status of node 'rabbit@node-404' ...
[{nodes,[{disc,['rabbit@node-288','rabbit@node-403','rabbit@node-404']}]},
{running_nodes,['rabbit@node-403','rabbit@node-404']},
{cluster_name,<<"<email address hidden>">>},
{partitions,[]}]
...done.
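The outputs above are asymmetric: node-288 and node-404 each omit the other from running_nodes, while node-403 still sees all three, and partitions is empty everywhere. One way to spot this is to diff the running_nodes view captured on each node. A sketch; extract_running is a hypothetical helper, and the sample capture is the node-288 output from this report:

```shell
# Pull the running_nodes list out of a saved "rabbitmqctl cluster_status"
# capture, leaving a comma-separated list of node names.
extract_running() {
  grep -o "{running_nodes,\[[^]]*\]}" "$1" \
    | tr -d "'" \
    | sed 's/{running_nodes,\[//;s/\]}//'
}

# Sample capture (node-288's view from this report):
cat > /tmp/node-288.status <<'EOF'
[{nodes,[{disc,['rabbit@node-288','rabbit@node-403','rabbit@node-404']}]},
 {running_nodes,['rabbit@node-403','rabbit@node-288']},
 {partitions,[]}]
EOF

extract_running /tmp/node-288.status
# -> rabbit@node-403,rabbit@node-288
```

Comparing this list across the three nodes makes the one-sided view (288 and 404 not seeing each other) obvious even though no partition is reported.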
atop outputs from the controller nodes at the time RabbitMQ was broken:
atop on node-288: http://paste.openstack.org/show/325652/
atop on node-403: http://paste.openstack.org/show/325653/
atop on node-404: http://paste.openstack.org/show/325654/
** Reason for termination ==
** {function_clause,
    [{rabbit_mirror_queue_slave,forget_sender,[down_from_ch,down_from_ch]},
     {rabbit_mirror_queue_slave,maybe_forget_sender,3},
     {rabbit_mirror_queue_slave,handle_info,2},
     {gen_server2,handle_msg,2},
     {proc_lib,wake_up,3}]}
rabbitmq_statistics is attached.
diagnostic snapshot here: http://mos-scale-share.mirantis.com/fuel-snapshot-2015-06-29_15-38-26.tar.xz