To recover this, we will need to do the following:
Log in to the RabbitMQ docker container on all the controllers, back up the directory "/var/lib/rabbitmq/mnesia/contrail@<Node_Name>", and then delete it.
Once this directory has been deleted in all the RabbitMQ containers, stop the rabbitmq container on all the controllers. Then start the rabbitmq container on only one of the controllers.
Wait about 10 seconds, then start the rabbitmq container on the other two controllers.
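The backup-and-delete step above can be sketched in shell. The mnesia path comes from the text; the demo below runs against a throwaway directory so it is safe to try anywhere, and Config2 is a stand-in for the real node name:

```shell
# Demo against a temporary tree; on a real controller the base is
# /var/lib/rabbitmq and this runs inside each rabbitmq container.
BASE=$(mktemp -d)
NODE=Config2                                  # stand-in; use the node's real name
MNESIA="$BASE/mnesia/contrail@$NODE"
mkdir -p "$MNESIA"                            # stands in for the existing database

# Back up the directory as a tarball, then delete it so RabbitMQ
# rebuilds a fresh database on the next start.
tar czf "$BASE/contrail@$NODE.tar.gz" -C "$BASE/mnesia" "contrail@$NODE"
rm -rf "$MNESIA"
```

The stop/start ordering in the steps above matters: the first node started forms a new cluster, and the other two join it when they come up.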
To verify that the RabbitMQ cluster is healthy, run the following command and check that all three nodes are present in the "running_nodes" field:
root@Config2:~/mnesia/contrail@Config2# rabbitmqctl cluster_status
Cluster status of node contrail@Config2
[{nodes,[{disc,[contrail@Config1,contrail@Config2,contrail@Config3]}]},
{running_nodes,[contrail@Config3,contrail@Config1,contrail@Config2]},
{cluster_name,<<"<email address hidden>">>},
{partitions,[]},
{alarms,[{contrail@Config3,[]},{contrail@Config1,[]},{contrail@Config2,[]}]}]
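That check can also be scripted: count the node entries inside running_nodes and expect three. The sample below reuses the output shown above; on a controller you would capture `rabbitmqctl cluster_status` instead:

```shell
# Sample output copied from the text; on a controller, replace with:
#   STATUS=$(rabbitmqctl cluster_status)
STATUS='[{nodes,[{disc,[contrail@Config1,contrail@Config2,contrail@Config3]}]},
 {running_nodes,[contrail@Config3,contrail@Config1,contrail@Config2]},
 {partitions,[]}]'

# Extract the running_nodes list and count the contrail@ entries in it.
RUNNING=$(printf '%s\n' "$STATUS" | grep -o 'running_nodes,\[[^]]*\]')
COUNT=$(printf '%s\n' "$RUNNING" | grep -o 'contrail@[A-Za-z0-9]*' | wc -l)
echo "running nodes: $COUNT"
```

A count below three means at least one node has not rejoined; also confirm that "partitions" is empty before declaring the cluster recovered.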
Notes:
k8s pod creation fails after config api restart