2015-06-10 12:01:13 |
Artem Panchenko |
bug |
|
|
added bug |
2015-06-10 13:05:02 |
Roman Podoliaka |
bug task added |
|
mos |
|
2015-06-10 13:05:23 |
Roman Podoliaka |
mos: assignee |
|
MOS Oslo (mos-oslo) |
|
2015-06-10 13:05:30 |
Roman Podoliaka |
mos: milestone |
|
6.1 |
|
2015-06-10 13:05:33 |
Roman Podoliaka |
mos: importance |
Undecided |
High |
|
2015-06-10 13:05:38 |
Roman Podoliaka |
bug task deleted |
fuel |
|
|
2015-06-10 13:06:14 |
Roman Podoliaka |
summary |
Nova can't boot instances after primary controller graceful shutdown 'MessagingTimeout: Timed out waiting for a reply to message ID xxx' |
RPC clients do not recreate a reply queue after restart of the last RabbitMQ server in the cluster |
|
2015-06-10 13:06:17 |
Roman Podoliaka |
description |
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
|
2015-06-10 13:07:11 |
Roman Podoliaka |
description |
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
Note: description of the original error in nova-conductor has been put below.
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
|
2015-06-10 13:07:15 |
Roman Podoliaka |
mos: status |
New |
Confirmed |
|
2015-06-10 13:17:05 |
Roman Podoliaka |
description |
Note: description of the original error in nova-conductor has been put below.
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
Reply queues created by oslo.messaging are not durable (i.e. they are gone after a restart of the last RabbitMQ server in the cluster). The problem is that after a successful failover of RabbitMQ, OpenStack services will correctly reconnect, but RPC calls will be broken until the affected service is restarted: the reply queue is not recreated, which means no reply can be received for a given call, and the call will eventually fail with TimeoutError.
As can be seen in the output of the commands below, this particular nova-conductor reply queue first migrated from one RabbitMQ node to another, then saw the death of another mirror, and after the RabbitMQ server on node-16 was restarted the queue was gone, yet the nova-conductor RPC client still tried to consume messages from it.
rabbitmqctl list_queues: http://xsnippet.org/360736/raw/
root@node-16:~# grep reply_f7cac1a2428d414bb8b9e0a61291a468 -P3 /var/log/rabbitmq/rabbit@node-16.log: http://xsnippet.org/360737/raw/
This wouldn't be a problem if a new reply queue were created for new RPC calls, but currently this makes the RPC client unusable unless the whole process is restarted.
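A minimal sketch of the failing call pattern (the transport URL, topic and method name below are illustrative assumptions, not values from this environment):

from oslo_config import cfg
import oslo_messaging as messaging

# assumed broker URL; any RabbitMQ node of the cluster would do
transport = messaging.get_transport(
    cfg.CONF, url='rabbit://guest:guest@node-16:5672/')
target = messaging.Target(topic='conductor')  # illustrative topic
client = messaging.RPCClient(transport, target, timeout=60)

try:
    # call() publishes the request and then consumes from a per-client,
    # non-durable reply queue. If that queue vanished with the restart of
    # the last RabbitMQ server and is never redeclared, no reply can ever
    # arrive and the call times out.
    client.call({}, 'ping')  # illustrative method name
except messaging.MessagingTimeout:
    # surfaces in the service log as
    # 'MessagingTimeout: Timed out waiting for a reply to message ID xxx'
    raise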
Note: description of the original error in nova-conductor has been put below.
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
|
2015-06-10 13:17:37 |
Roman Podoliaka |
description |
Reply queues created by oslo.messaging are not durable (i.e. they are gone after a restart of the last RabbitMQ server in the cluster). The problem is that after a successful failover of RabbitMQ, OpenStack services will correctly reconnect, but RPC calls will be broken until the affected service is restarted: the reply queue is not recreated, which means no reply can be received for a given call, and the call will eventually fail with TimeoutError.
As can be seen in the output of the commands below, this particular nova-conductor reply queue first migrated from one RabbitMQ node to another, then saw the death of another mirror, and after the RabbitMQ server on node-16 was restarted the queue was gone, yet the nova-conductor RPC client still tried to consume messages from it.
rabbitmqctl list_queues: http://xsnippet.org/360736/raw/
root@node-16:~# grep reply_f7cac1a2428d414bb8b9e0a61291a468 -P3 /var/log/rabbitmq/rabbit@node-16.log: http://xsnippet.org/360737/raw/
This wouldn't be a problem if a new reply queue were created for new RPC calls, but currently this makes the RPC client unusable unless the whole process is restarted.
Note: description of the original error in nova-conductor has been put below.
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
Reply queues created by oslo.messaging are not durable (i.e. they are gone after a restart of the last RabbitMQ server in the cluster). The problem is that after a successful failover of RabbitMQ, OpenStack services will correctly reconnect, but RPC calls will be broken until the affected service is restarted: the reply queue is not recreated, which means no reply can be received for a given call, and the call will eventually fail with TimeoutError.
As can be seen in the output of the commands below, this particular nova-conductor reply queue first migrated from one RabbitMQ node to another, then saw the death of another mirror, and after the RabbitMQ server on node-16 was restarted the queue was gone, yet the nova-conductor RPC client still tried to consume messages from it.
rabbitmqctl list_queues: http://xsnippet.org/360736/raw/
root@node-16:~# grep reply_f7cac1a2428d414bb8b9e0a61291a468 -P3 /var/log/rabbitmq/rabbit@node-16.log: http://xsnippet.org/360737/raw/
This wouldn't be a problem if a new reply queue were created for new RPC calls, but currently this makes the RPC client unusable unless the whole process is restarted.
Note: description of the original error in nova-conductor has been put below.
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
|
2015-06-11 12:43:02 |
Eugene Bogdanov |
tags |
baremetal ha |
6.1rc2 baremetal ha |
|
2015-06-15 08:42:38 |
Bogdan Dobrelya |
summary |
RPC clients do not recreate a reply queue after restart of the last RabbitMQ server in the cluster |
RPC clients cannot find a reply queue after restart of the last RabbitMQ server in the cluster |
|
2015-06-16 13:11:23 |
Eugene Bogdanov |
tags |
6.1rc2 baremetal ha |
baremetal ha |
|
2015-06-16 16:03:31 |
Roman Podoliaka |
mos: assignee |
MOS Oslo (mos-oslo) |
Victor Sergeyev (vsergeyev) |
|
2015-06-16 16:09:48 |
Roman Podoliaka |
description |
Reply queues created by oslo.messaging are not durable (i.e. they are gone after a restart of the last RabbitMQ server in the cluster). The problem is that after a successful failover of RabbitMQ, OpenStack services will correctly reconnect, but RPC calls will be broken until the affected service is restarted: the reply queue is not recreated, which means no reply can be received for a given call, and the call will eventually fail with TimeoutError.
As can be seen in the output of the commands below, this particular nova-conductor reply queue first migrated from one RabbitMQ node to another, then saw the death of another mirror, and after the RabbitMQ server on node-16 was restarted the queue was gone, yet the nova-conductor RPC client still tried to consume messages from it.
rabbitmqctl list_queues: http://xsnippet.org/360736/raw/
root@node-16:~# grep reply_f7cac1a2428d414bb8b9e0a61291a468 -P3 /var/log/rabbitmq/rabbit@node-16.log: http://xsnippet.org/360737/raw/
This wouldn't be a problem if a new reply queue were created for new RPC calls, but currently this makes the RPC client unusable unless the whole process is restarted.
Note: description of the original error in nova-conductor has been put below.
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
Reply queues created by oslo.messaging are not durable (i.e. they are gone after a restart of the last RabbitMQ server in the cluster). The problem is that after a successful failover of RabbitMQ, OpenStack services will correctly reconnect, but RPC calls will be broken until the affected service is restarted: the reply queue is not recreated, which means no reply can be received for a given call, and the call will eventually fail with TimeoutError.
As can be seen in the output of the commands below, this particular nova-conductor reply queue first migrated from one RabbitMQ node to another, then saw the death of another mirror, and after the RabbitMQ server on node-16 was restarted the queue was gone, yet the nova-conductor RPC client still tried to consume messages from it.
rabbitmqctl list_queues: http://xsnippet.org/360736/raw/
root@node-16:~# grep reply_f7cac1a2428d414bb8b9e0a61291a468 -P3 /var/log/rabbitmq/rabbit@node-16.log: http://xsnippet.org/360737/raw/
This wouldn't be a problem if a new reply queue were created for new RPC calls, but currently this makes the RPC client unusable unless the whole process is restarted.
Note: description of the original error in nova-conductor has been put below.
==========================================================================
User impact:
Sometimes, after a successful failover of the RabbitMQ server, OpenStack services may start to fail RPC calls with the following message in the logs: "Queue not found:Basic.consume: (404) NOT_FOUND - no queue 'reply_f7cac1a2428d414bb8b9e0a612". This happens when one or more controllers are brought down, but not frequently. The workaround is to restart the affected service. After the restart, the service will immediately become operational.
The plan is to put the fix for this into -updates.
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
|
2015-06-16 16:21:24 |
Roman Podoliaka |
description |
Reply queues created by oslo.messaging are not durable (i.e. they are gone after a restart of the last RabbitMQ server in the cluster). The problem is that after a successful failover of RabbitMQ, OpenStack services will correctly reconnect, but RPC calls will be broken until the affected service is restarted: the reply queue is not recreated, which means no reply can be received for a given call, and the call will eventually fail with TimeoutError.
As can be seen in the output of the commands below, this particular nova-conductor reply queue first migrated from one RabbitMQ node to another, then saw the death of another mirror, and after the RabbitMQ server on node-16 was restarted the queue was gone, yet the nova-conductor RPC client still tried to consume messages from it.
rabbitmqctl list_queues: http://xsnippet.org/360736/raw/
root@node-16:~# grep reply_f7cac1a2428d414bb8b9e0a61291a468 -P3 /var/log/rabbitmq/rabbit@node-16.log: http://xsnippet.org/360737/raw/
This wouldn't be a problem if a new reply queue were created for new RPC calls, but currently this makes the RPC client unusable unless the whole process is restarted.
Note: description of the original error in nova-conductor has been put below.
==========================================================================
User impact:
Sometimes, after a successful failover of the RabbitMQ server, OpenStack services may start to fail RPC calls with the following message in the logs: "Queue not found:Basic.consume: (404) NOT_FOUND - no queue 'reply_f7cac1a2428d414bb8b9e0a612". This happens when one or more controllers are brought down, but not frequently. The workaround is to restart the affected service. After the restart, the service will immediately become operational.
The plan is to put the fix for this into -updates.
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
Reply queues created by oslo.messaging are not durable (i.e. they are gone after a restart of the last RabbitMQ server in the cluster). The problem is that after a successful failover of RabbitMQ, OpenStack services will correctly reconnect, but RPC calls will be broken until the affected service is restarted: the reply queue is not recreated, which means no reply can be received for a given call, and the call will eventually fail with TimeoutError.
As can be seen in the output of the commands below, this particular nova-conductor reply queue first migrated from one RabbitMQ node to another, then saw the death of another mirror, and after the RabbitMQ server on node-16 was restarted the queue was gone, yet the nova-conductor RPC client still tried to consume messages from it.
rabbitmqctl list_queues: http://xsnippet.org/360736/raw/
root@node-16:~# grep reply_f7cac1a2428d414bb8b9e0a61291a468 -P3 /var/log/rabbitmq/rabbit@node-16.log: http://xsnippet.org/360737/raw/
This wouldn't be a problem if a new reply queue were created for new RPC calls, but currently this makes the RPC client unusable unless the whole process is restarted.
Note: description of the original error in nova-conductor has been put below.
==========================================================================
User impact:
Sometimes, after a successful failover of the RabbitMQ server, OpenStack services may start to fail RPC calls with the following message in the logs: "Queue not found:Basic.consume: (404) NOT_FOUND - no queue 'reply_f7cac1a2428d414bb8b9e0a612". This happens when one or more controllers are brought down, but not frequently. The workaround is to restart the affected service. After the restart, the service will immediately become operational.
The plan is to put the fix for this into -updates.
Possible workarounds:
1) restart the affected services
2) make the reply queues durable, so that they survive a RabbitMQ restart (then, even if oslo.messaging fails to redeclare the queues explicitly, they are persisted by RabbitMQ itself) <-- the problem with that is that it does not fix the root cause of the issue
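For reference, here is what option 2) would mean at the AMQP level, sketched with kombu (the library the rabbit driver uses underneath). The broker URL is an assumption, the queue name is the one from the grep output above, and the declaration options only roughly mirror what oslo.messaging does for reply queues:

from kombu import Connection, Exchange, Queue

name = 'reply_f7cac1a2428d414bb8b9e0a61291a468'

# roughly the current behaviour: a transient, auto-delete queue, which is
# gone once the last RabbitMQ node in the cluster restarts
transient = Queue(name,
                  Exchange(name, type='direct', durable=False,
                           auto_delete=True),
                  routing_key=name, durable=False, auto_delete=True)

# option 2): durable and not auto-delete, so RabbitMQ itself persists the
# queue across broker restarts even if the client never redeclares it
durable = Queue(name,
                Exchange(name, type='direct', durable=True),
                routing_key=name, durable=True, auto_delete=False)

with Connection('amqp://guest:guest@node-16:5672//') as conn:
    durable(conn.default_channel).declare()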
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
|
2015-06-16 18:17:58 |
Dmitry Mescheryakov |
description |
Reply queues created by oslo.messaging are not durable (i.e. they are gone after a restart of the last RabbitMQ server in the cluster). The problem is that after a successful failover of RabbitMQ, OpenStack services will correctly reconnect, but RPC calls will be broken until the affected service is restarted: the reply queue is not recreated, which means no reply can be received for a given call, and the call will eventually fail with TimeoutError.
As can be seen in the output of the commands below, this particular nova-conductor reply queue first migrated from one RabbitMQ node to another, then saw the death of another mirror, and after the RabbitMQ server on node-16 was restarted the queue was gone, yet the nova-conductor RPC client still tried to consume messages from it.
rabbitmqctl list_queues: http://xsnippet.org/360736/raw/
root@node-16:~# grep reply_f7cac1a2428d414bb8b9e0a61291a468 -P3 /var/log/rabbitmq/rabbit@node-16.log: http://xsnippet.org/360737/raw/
This wouldn't be a problem if a new reply queue were created for new RPC calls, but currently this makes the RPC client unusable unless the whole process is restarted.
Note: description of the original error in nova-conductor has been put below.
==========================================================================
User impact:
Sometimes, after a successful failover of the RabbitMQ server, OpenStack services may start to fail RPC calls with the following message in the logs: "Queue not found:Basic.consume: (404) NOT_FOUND - no queue 'reply_f7cac1a2428d414bb8b9e0a612". This happens when one or more controllers are brought down, but not frequently. The workaround is to restart the affected service. After the restart, the service will immediately become operational.
The plan is to put the fix for this into -updates.
Possible workarounds:
1) restart the affected services
2) make the reply queues durable, so that they survive a RabbitMQ restart (then, even if oslo.messaging fails to redeclare the queues explicitly, they are persisted by RabbitMQ itself) <-- the problem with that is that it does not fix the root cause of the issue
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
Steps to reproduce:
1. Deploy a MOS environment in HA mode with several controllers
2. Shut down one of the controllers, either gracefully or not
3. Wait for MySQL, RabbitMQ and OpenStack to fail over (several minutes)
4. Try to use an OpenStack API call which invokes internal messaging via RabbitMQ. For instance, you can view the console log of an instance, or create instances, networks, volumes, etc.
Some requests sent in step #4 might fail with a timeout. In the logs of the affected service, the following message could be seen:
"Queue not found:Basic.consume: (404) NOT_FOUND - no queue 'reply_f7cac1a2428d414bb8b9e0a612"
Conditions for reproduction:
The issue occurs rather infrequently, though we don't have exact data. So far we have only 4 reproductions reported during the last 2-3 weeks.
User impact:
While the issue exists, some requests to OpenStack might fail (those which are processed on the affected controller/service). The issue does not disappear with time without a manual fix (see the workaround below).
Workaround:
The workaround is to restart the affected service. After the restart, the service will immediately become operational.
Current plan:
We are planning to fix the issue in updates for 6.1. Right now we are reproducing it with additional logging enabled to understand the root cause.
Detailed analysis by Roman Podoliaka
==========================================================================
Reply queues created by oslo.messaging are not durable (i.e. they are gone after a restart of the last RabbitMQ server in the cluster). The problem is that after a successful failover of RabbitMQ, OpenStack services will correctly reconnect, but RPC calls will be broken until the affected service is restarted: the reply queue is not recreated, which means no reply can be received for a given call, and the call will eventually fail with TimeoutError.
As can be seen in the output of the commands below, this particular nova-conductor reply queue first migrated from one RabbitMQ node to another, then saw the death of another mirror, and after the RabbitMQ server on node-16 was restarted the queue was gone, yet the nova-conductor RPC client still tried to consume messages from it.
rabbitmqctl list_queues: http://xsnippet.org/360736/raw/
root@node-16:~# grep reply_f7cac1a2428d414bb8b9e0a61291a468 -P3 /var/log/rabbitmq/rabbit@node-16.log: http://xsnippet.org/360737/raw/
This wouldn't be a problem if a new reply queue were created for new RPC calls, but currently this makes the RPC client unusable unless the whole process is restarted.
Note: description of the original error in nova-conductor has been put below.
Initial description by Artem Panchenko
==========================================================================
Fuel version info (6.1 build #521 RC1): http://paste.openstack.org/show/277715/
After shutting down the primary controller, OSTF tests which create Nova instances fail, because all newly booted instances end up in ERROR state:
http://paste.openstack.org/show/281028/
Here is a part of nova-conductor.log (node-16):
http://paste.openstack.org/show/281014/
RabbitMQ cluster status looks good:
[root@fuel-lab-cz5558 ~]# runc 2 rabbitmqctl cluster_status
DEPRECATION WARNING: /etc/fuel/client/config.yaml exists and will be used as the source for settings. This behavior is deprecated. Please specify the path to your custom settings file in the FUELCLIENT_CUSTOM_SETTINGS environment variable.
node-16.mirantis.com
Cluster status of node 'rabbit@node-16' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-7','rabbit@node-16']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
node-7.mirantis.com
Cluster status of node 'rabbit@node-7' ...
[{nodes,[{disc,['rabbit@node-16','rabbit@node-7']}]},
{running_nodes,['rabbit@node-16','rabbit@node-7']},
{cluster_name,<<"rabbit@node-5.mirantis.com">>},
{partitions,[]}]
...done.
Here is the AMQP queue info:
http://paste.openstack.org/show/281029/
Steps to reproduce:
1. Create environment: Ubuntu, NeutronGRE, Ceph, Sahara, Ceilometer
2. Add 1 controller, 2 controller+ceph, 1 compute and 3 mongo nodes
3. Deploy changes.
4. Run OSTF
5. Shut down the primary controller (gracefully, using the `poweroff` command)
6. Run OSTF
Expected result:
- all tests pass except 'Check that required services are running'
Actual:
- all tests which create Nova instances fail
Also, I couldn't find out why, but all API requests to Nova take a long time; for example, a simple `nova list` command takes 17 seconds:
http://paste.openstack.org/show/281031/
Diagnostic snapshot (environment ID - 2, nodes: 5,16,7,6,11,13,14): https://drive.google.com/file/d/0BzaZINLQ8-xkNDFrX2RKRS1GOWs/view?usp=sharing |
|
2015-06-17 12:08:07 |
Vitaly Sedelnik |
mos: milestone |
6.1 |
6.1-updates |
|
2015-06-18 16:07:59 |
Viktor Serhieiev |
mos: assignee |
Victor Sergeyev (vsergeyev) |
Alex Khivin (akhivin) |
|
2015-06-19 12:13:17 |
Alexey Khivin |
mos: status |
Confirmed |
In Progress |
|
2015-06-23 14:09:54 |
Alexey Khivin |
mos: status |
In Progress |
Fix Committed |
|
2015-06-24 22:41:08 |
Eugene Nikanorov |
tags |
baremetal ha |
baremetal customer-found ha |
|
2015-06-24 23:00:20 |
Alexander Nevenchannyy |
nominated for series |
|
mos/6.0-updates |
|
2015-06-24 23:00:20 |
Alexander Nevenchannyy |
bug task added |
|
mos/6.0-updates |
|
2015-06-24 23:00:31 |
Alexander Nevenchannyy |
mos/6.0-updates: status |
New |
In Progress |
|
2015-06-24 23:00:34 |
Alexander Nevenchannyy |
mos/6.0-updates: importance |
Undecided |
High |
|
2015-06-24 23:00:37 |
Alexander Nevenchannyy |
mos/6.0-updates: assignee |
|
Alexander Nevenchannyy (anevenchannyy) |
|
2015-06-24 23:00:44 |
Alexander Nevenchannyy |
mos/6.0-updates: milestone |
|
6.0-mu-4 |
|
2015-06-25 12:34:21 |
Vitaly Sedelnik |
tags |
baremetal customer-found ha |
6.1-mu-1 baremetal customer-found ha |
|
2015-06-25 19:47:16 |
Vitaly Sedelnik |
mos/6.0-updates: status |
In Progress |
Fix Committed |
|
2015-06-26 17:31:33 |
Vitaly Sedelnik |
mos/6.0-updates: status |
Fix Committed |
Fix Released |
|
2015-07-06 12:13:02 |
Vitaly Sedelnik |
mos: milestone |
6.1-updates |
6.1-mu-1 |
|
2015-07-13 05:51:32 |
Wang Yanbin |
bug |
|
|
added subscriber Wang Yanbin |
2015-07-14 15:27:38 |
Alexey Khivin |
nominated for series |
|
mos/5.1-updates |
|
2015-07-14 15:27:38 |
Alexey Khivin |
bug task added |
|
mos/5.1-updates |
|
2015-07-14 15:27:38 |
Alexey Khivin |
nominated for series |
|
mos/5.1.1-updates |
|
2015-07-14 15:27:38 |
Alexey Khivin |
bug task added |
|
mos/5.1.1-updates |
|
2015-07-14 15:27:49 |
Alexey Khivin |
mos/5.1-updates: assignee |
|
Alex Khivin (akhivin) |
|
2015-07-14 15:27:50 |
Alexey Khivin |
mos/5.1.1-updates: assignee |
|
Alex Khivin (akhivin) |
|
2015-07-14 15:27:54 |
Alexey Khivin |
mos/5.1-updates: importance |
Undecided |
High |
|
2015-07-14 15:27:55 |
Alexey Khivin |
mos/5.1.1-updates: importance |
Undecided |
High |
|
2015-07-14 15:28:00 |
Alexey Khivin |
mos/5.1-updates: status |
New |
In Progress |
|
2015-07-14 15:28:03 |
Alexey Khivin |
mos/5.1.1-updates: status |
New |
In Progress |
|
2015-07-14 15:28:17 |
Alexey Khivin |
mos/5.1-updates: milestone |
|
5.1-mu-1 |
|
2015-07-14 15:28:20 |
Alexey Khivin |
mos/5.1.1-updates: milestone |
|
5.1.1-mu-1 |
|
2015-07-15 11:38:25 |
Alexey Khivin |
mos/5.1-updates: status |
In Progress |
Fix Committed |
|
2015-07-15 11:38:27 |
Alexey Khivin |
mos/5.1.1-updates: status |
In Progress |
Fix Committed |
|
2015-07-27 19:13:42 |
Vitaly Sedelnik |
mos/5.1-updates: status |
Fix Committed |
Fix Released |
|
2015-07-27 19:13:44 |
Vitaly Sedelnik |
mos/5.1.1-updates: status |
Fix Committed |
Fix Released |
|
2015-07-31 11:05:40 |
Dmitry Mescheryakov |
nominated for series |
|
mos/7.0.x |
|
2015-07-31 11:05:40 |
Dmitry Mescheryakov |
bug task added |
|
mos/7.0.x |
|
2015-07-31 11:05:54 |
Dmitry Mescheryakov |
nominated for series |
|
mos/6.1.x |
|
2015-07-31 11:05:54 |
Dmitry Mescheryakov |
bug task added |
|
mos/6.1.x |
|
2015-07-31 11:06:01 |
Dmitry Mescheryakov |
mos/6.1.x: status |
New |
Fix Committed |
|
2015-07-31 11:06:03 |
Dmitry Mescheryakov |
mos/6.1.x: importance |
Undecided |
High |
|
2015-07-31 11:06:12 |
Dmitry Mescheryakov |
mos/6.1.x: assignee |
|
Alexey Khivin (akhivin) |
|
2015-07-31 11:06:27 |
Dmitry Mescheryakov |
mos/6.1.x: milestone |
|
6.1-mu-2 |
|
2015-07-31 11:14:55 |
Dmitry Mescheryakov |
mos/7.0.x: status |
Fix Committed |
Confirmed |
|
2015-07-31 11:14:57 |
Dmitry Mescheryakov |
mos/7.0.x: assignee |
Alexey Khivin (akhivin) |
Dmitry Mescheryakov (dmitrymex) |
|
2015-07-31 11:15:02 |
Dmitry Mescheryakov |
mos/7.0.x: milestone |
6.1-mu-1 |
7.0 |
|
2015-07-31 11:16:05 |
Vitaly Sedelnik |
mos/6.1.x: milestone |
6.1-mu-2 |
6.1-mu-1 |
|
2015-08-14 11:27:26 |
Dmitry Mescheryakov |
mos/7.0.x: status |
Confirmed |
Fix Committed |
|
2015-08-14 11:27:48 |
Dmitry Mescheryakov |
nominated for series |
|
mos/8.0 |
|
2015-08-14 11:27:48 |
Dmitry Mescheryakov |
bug task added |
|
mos/8.0 |
|
2015-08-14 11:27:57 |
Dmitry Mescheryakov |
mos/8.0: assignee |
|
MOS Oslo (mos-oslo) |
|
2015-08-14 11:27:58 |
Dmitry Mescheryakov |
mos/8.0: importance |
Undecided |
High |
|
2015-08-14 11:28:04 |
Dmitry Mescheryakov |
mos/8.0: status |
New |
Confirmed |
|
2015-08-14 11:28:07 |
Dmitry Mescheryakov |
mos/8.0: milestone |
|
8.0 |
|
2015-09-08 13:00:04 |
Alexander Makarov |
mos/7.0.x: status |
Fix Committed |
Fix Released |
|
2015-09-17 00:18:25 |
Roman Rufanov |
tags |
6.1-mu-1 baremetal customer-found ha |
6.1-mu-1 baremetal customer-found ha support |
|
2015-09-26 09:49:48 |
Vitaly Sedelnik |
nominated for series |
|
mos/5.1.x |
|
2015-09-26 09:49:48 |
Vitaly Sedelnik |
bug task added |
|
mos/5.1.x |
|
2015-09-26 09:49:54 |
Vitaly Sedelnik |
mos/5.1.x: status |
New |
Fix Released |
|
2015-09-26 09:49:57 |
Vitaly Sedelnik |
mos/5.1.x: importance |
Undecided |
High |
|
2015-09-26 09:50:06 |
Vitaly Sedelnik |
mos/5.1.x: assignee |
|
Alexey Khivin (akhivin) |
|
2015-09-26 09:50:13 |
Vitaly Sedelnik |
mos/5.1.x: milestone |
|
5.1.1-mu-1 |
|
2015-09-26 13:23:47 |
Vitaly Sedelnik |
nominated for series |
|
mos/6.0.x |
|
2015-09-26 13:23:47 |
Vitaly Sedelnik |
bug task added |
|
mos/6.0.x |
|
2015-09-26 13:23:54 |
Vitaly Sedelnik |
mos/6.0.x: status |
New |
Fix Released |
|
2015-09-26 13:23:57 |
Vitaly Sedelnik |
mos/6.0.x: importance |
Undecided |
High |
|
2015-09-26 13:24:05 |
Vitaly Sedelnik |
mos/6.0.x: assignee |
|
Alexander Nevenchannyy (anevenchannyy) |
|
2015-09-26 13:24:08 |
Vitaly Sedelnik |
mos/6.0.x: milestone |
|
6.0-mu-4 |
|
2015-11-27 16:37:20 |
Dmitry Mescheryakov |
mos/8.0.x: status |
Confirmed |
Fix Committed |
|
2016-02-16 10:54:55 |
Maksym Strukov |
tags |
6.1-mu-1 baremetal customer-found ha support |
6.1-mu-1 baremetal customer-found ha on-verification support |
|
2016-02-16 14:45:45 |
Maksym Strukov |
mos/8.0.x: status |
Fix Committed |
Fix Released |
|
2016-02-16 14:45:54 |
Maksym Strukov |
tags |
6.1-mu-1 baremetal customer-found ha on-verification support |
6.1-mu-1 baremetal customer-found ha support |
|
2016-04-11 08:59:54 |
kejunyang |
mos/6.1.x: status |
Fix Committed |
Incomplete |
|
2016-04-11 09:08:55 |
Dmitry Mescheryakov |
mos/6.1.x: status |
Incomplete |
Fix Committed |
|
2016-06-16 19:43:23 |
Vitaly Sedelnik |
mos/6.1.x: status |
Fix Committed |
Fix Released |
|