2016-08-23 16:19:24 |
Javier Diaz Jr |
bug |
|
|
added bug |
2016-08-23 16:29:20 |
Javier Diaz Jr |
description |
Detailed bug description:
Basically, a customer requires ceilometer to be disabled. We have done that successfully; however, as you may know, the rest of the services will continue to publish messages to the info queue for ceilometer to consume. Without ceilometer that queue will grow and grow. We have since set `notification_driver=` to blank on all services and restarted all APIs, neutron-server, and nova-compute. The info queue is a lot smaller now after purging it; however, the queue still grows when we delete and create instances. It appears to be neutron, but I can't for the life of me figure out exactly how to stop it.
Here is the message payload:
{"_context_roles": ["Member"], "_context_request_id": "req-95b0ebc6-2a94-4b11-b763-8b790ae077c5", "event_type": "port.delete.start", "timestamp": "2016-08-22 18:39:33.145863", "_context_tenant_id": "2297e1dd886e4e4e973985299bdce5e9", "_unique_id": "afca5549c40a42baa0f3a9b5fb1d96df", "_context_tenant_name": "S1 Win2k8 Docvert - Development", "_context_user": "3d8a88bd06444dcb87a44907b1ffe604", "_context_user_id": "3d8a88bd06444dcb87a44907b1ffe604", "payload": {"port_id": "1953f662-9ccf-42e7-aec1-9a4f856ef5f7"}, "_context_project_name": "S1 Win2k8 Docvert - Development", "_context_read_deleted": "no", "_context_tenant": "2297e1dd886e4e4e973985299bdce5e9", "priority": "INFO", "_context_is_admin": false, "_context_project_id": "2297e1dd886e4e4e973985299bdce5e9", "_context_timestamp": "2016-08-22 18:39:33.142453", "_context_user_name": "0171070", "publisher_id": "network.node-10.int.thomsonreuters.com", "message_id": "ad0329e0-eb56-4b36-b0e1-480a90ce403a"}
[root@CONTROLLER-1 ~]# find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;
/etc/cinder/cinder-amers1a.conf:notification_driver =
/etc/cinder/cinder.conf:notification_driver =
/etc/cinder/cinder-amers1b.conf:notification_driver =
/etc/nova/nova.conf:notification_driver=
/etc/glance/glance-api.conf:notification_driver =
/etc/heat/heat.conf:notification_driver =
/etc/neutron/neutron.conf:notification_driver =
[root@COMPUTE-1 ~]# find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;
/etc/nova/nova.conf:notification_driver=
/etc/nova/nova.conf:notification_driver=
/etc/nova/nova.conf:notification_driver=
Steps to reproduce:
1. Stop all ceilometer-related services on controllers and compute nodes.
2. Find all conf files with the notification_driver variable (find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;)
3. Set the notification_driver variable to blank or noop.
4. Restart all corresponding APIs, neutron-server, and nova-compute.
5. Purge the notifications.info queue.
6. Create and delete an instance.
7. The notifications.info queue grows again.
Expected results:
Queue should no longer be published to.
Actual result:
Neutron appears to be publishing messages for port create and port delete events (see the payload above).
Reproducibility:
100%
Workaround:
Cron job to purge queue manually.
Impact:
If no cron job is configured, the queue will grow too large and RabbitMQ may experience issues.
Description of the environment:
- Operating system: Ubuntu 14.04
- Versions of components: MOS 7.0 MU4 |
Detailed bug description:
Basically, a customer requires ceilometer to be disabled. We have done that successfully; however, as you may know, the rest of the services will continue to publish messages to the info queue for ceilometer to consume. Without ceilometer that queue will grow and grow. We have since set `notification_driver=` to blank on all services and restarted all APIs, neutron-server, and nova-compute. The info queue is a lot smaller now after purging it; however, the queue still grows when we delete and create instances. It appears to be neutron, but I can't for the life of me figure out exactly how to stop it.
Here is the message payload:
{"_context_roles": ["Member"], "_context_request_id": "req-95b0ebc6-2a94-4b11-b763-8b790ae077c5", "event_type": "port.delete.start", "timestamp": "2016-08-22 18:39:33.145863", "_context_tenant_id": "2297e1dd886e4e4e973985299bdce5e9", "_unique_id": "afca5549c40a42baa0f3a9b5fb1d96df", "_context_tenant_name": "S1 Win2k8 Docvert - Development", "_context_user": "3d8a88bd06444dcb87a44907b1ffe604", "_context_user_id": "3d8a88bd06444dcb87a44907b1ffe604", "payload": {"port_id": "1953f662-9ccf-42e7-aec1-9a4f856ef5f7"}, "_context_project_name": "S1 Win2k8 Docvert - Development", "_context_read_deleted": "no", "_context_tenant": "2297e1dd886e4e4e973985299bdce5e9", "priority": "INFO", "_context_is_admin": false, "_context_project_id": "2297e1dd886e4e4e973985299bdce5e9", "_context_timestamp": "2016-08-22 18:39:33.142453", "_context_user_name": "0171070", "publisher_id": "network.node-10.int.thomsonreuters.com", "message_id": "ad0329e0-eb56-4b36-b0e1-480a90ce403a"}
[root@CONTROLLER-1 ~]# find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;
/etc/cinder/cinder-amers1a.conf:notification_driver =
/etc/cinder/cinder.conf:notification_driver =
/etc/cinder/cinder-amers1b.conf:notification_driver =
/etc/nova/nova.conf:notification_driver=
/etc/glance/glance-api.conf:notification_driver =
/etc/heat/heat.conf:notification_driver =
/etc/neutron/neutron.conf:notification_driver =
[root@COMPUTE-1 ~]# find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;
/etc/nova/nova.conf:notification_driver=
/etc/nova/nova.conf:notification_driver=
/etc/nova/nova.conf:notification_driver=
Steps to reproduce:
1. Stop all ceilometer-related services on controllers and compute nodes.
2. Find all conf files with the notification_driver variable (find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;)
3. Set the notification_driver variable to blank or noop.
4. Restart all corresponding APIs, neutron-server, and nova-compute.
5. Purge the notifications.info queue.
6. Create and delete an instance.
7. The notifications.info queue grows again.
Expected results:
Queue should no longer be published to.
Actual result:
Neutron appears to be publishing messages for port create and port delete events (see the payload above).
Reproducibility:
100%
Workaround:
Cron job to purge queue manually.
Impact:
If no cron job is configured, the queue will grow too large and RabbitMQ may experience issues.
Description of the environment:
- Operating system: Ubuntu 14.04
- Versions of components: MOS 7.0 MU4, MOS 5.1 (I assume other MOS versions are affected too) |
|
2016-08-23 16:30:54 |
Javier Diaz Jr |
description |
Detailed bug description:
Basically, a customer requires ceilometer to be disabled. We have done that successfully; however, as you may know, the rest of the services will continue to publish messages to the info queue for ceilometer to consume. Without ceilometer that queue will grow and grow. We have since set `notification_driver=` to blank on all services and restarted all APIs, neutron-server, and nova-compute. The info queue is a lot smaller now after purging it; however, the queue still grows when we delete and create instances. It appears to be neutron, but I can't for the life of me figure out exactly how to stop it.
Here is the message payload:
{"_context_roles": ["Member"], "_context_request_id": "req-95b0ebc6-2a94-4b11-b763-8b790ae077c5", "event_type": "port.delete.start", "timestamp": "2016-08-22 18:39:33.145863", "_context_tenant_id": "2297e1dd886e4e4e973985299bdce5e9", "_unique_id": "afca5549c40a42baa0f3a9b5fb1d96df", "_context_tenant_name": "S1 Win2k8 Docvert - Development", "_context_user": "3d8a88bd06444dcb87a44907b1ffe604", "_context_user_id": "3d8a88bd06444dcb87a44907b1ffe604", "payload": {"port_id": "1953f662-9ccf-42e7-aec1-9a4f856ef5f7"}, "_context_project_name": "S1 Win2k8 Docvert - Development", "_context_read_deleted": "no", "_context_tenant": "2297e1dd886e4e4e973985299bdce5e9", "priority": "INFO", "_context_is_admin": false, "_context_project_id": "2297e1dd886e4e4e973985299bdce5e9", "_context_timestamp": "2016-08-22 18:39:33.142453", "_context_user_name": "0171070", "publisher_id": "network.node-10.int.thomsonreuters.com", "message_id": "ad0329e0-eb56-4b36-b0e1-480a90ce403a"}
[root@CONTROLLER-1 ~]# find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;
/etc/cinder/cinder-amers1a.conf:notification_driver =
/etc/cinder/cinder.conf:notification_driver =
/etc/cinder/cinder-amers1b.conf:notification_driver =
/etc/nova/nova.conf:notification_driver=
/etc/glance/glance-api.conf:notification_driver =
/etc/heat/heat.conf:notification_driver =
/etc/neutron/neutron.conf:notification_driver =
[root@COMPUTE-1 ~]# find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;
/etc/nova/nova.conf:notification_driver=
/etc/nova/nova.conf:notification_driver=
/etc/nova/nova.conf:notification_driver=
Steps to reproduce:
1. Stop all ceilometer-related services on controllers and compute nodes.
2. Find all conf files with the notification_driver variable (find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;)
3. Set the notification_driver variable to blank or noop.
4. Restart all corresponding APIs, neutron-server, and nova-compute.
5. Purge the notifications.info queue.
6. Create and delete an instance.
7. The notifications.info queue grows again.
Expected results:
Queue should no longer be published to.
Actual result:
Neutron appears to be publishing messages for port create and port delete events (see the payload above).
Reproducibility:
100%
Workaround:
Cron job to purge queue manually.
Impact:
If no cron job is configured, the queue will grow too large and RabbitMQ may experience issues.
Description of the environment:
- Operating system: Ubuntu 14.04
- Versions of components: MOS 7.0 MU4, MOS 5.1 (I assume other MOS versions are affected too) |
Detailed bug description:
Basically, a customer requires ceilometer to be disabled. We have done that successfully; however, as you may know, the rest of the services will continue to publish messages to the info queue for ceilometer to consume. Without ceilometer that queue will grow and grow. We have since set `notification_driver=` to blank on all services and restarted all APIs, neutron-server, and nova-compute. The info queue is a lot smaller now after purging it; however, the queue still grows when we delete and create instances. It appears to be neutron, but I can't for the life of me figure out exactly how to stop it.
Here is the message payload:
{"_context_roles": ["Member"], "_context_request_id": "req-95b0ebc6-2a94-4b11-b763-8b790ae077c5", "event_type": "port.delete.start", "timestamp": "2016-08-22 18:39:33.145863", "_context_tenant_id": "2297e1dd886e4e4e973985299bdce5e9", "_unique_id": "afca5549c40a42baa0f3a9b5fb1d96df", "_context_tenant_name": "Win2k8", "_context_user": "3d8a88bd06444dcb87a44907b1ffe604", "_context_user_id": "3d8a88bd06444dcb87a44907b1ffe604", "payload": {"port_id": "1953f662-9ccf-42e7-aec1-9a4f856ef5f7"}, "_context_project_name": "Win2k8", "_context_read_deleted": "no", "_context_tenant": "2297e1dd886e4e4e973985299bdce5e9", "priority": "INFO", "_context_is_admin": false, "_context_project_id": "2297e1dd886e4e4e973985299bdce5e9", "_context_timestamp": "2016-08-22 18:39:33.142453", "_context_user_name": "0171070", "publisher_id": "controller@domain.com(edited for privacy reasons)", "message_id": "ad0329e0-eb56-4b36-b0e1-480a90ce403a"}
[root@CONTROLLER-1 ~]# find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;
/etc/cinder/cinder-amers1a.conf:notification_driver =
/etc/cinder/cinder.conf:notification_driver =
/etc/cinder/cinder-amers1b.conf:notification_driver =
/etc/nova/nova.conf:notification_driver=
/etc/glance/glance-api.conf:notification_driver =
/etc/heat/heat.conf:notification_driver =
/etc/neutron/neutron.conf:notification_driver =
[root@COMPUTE-1 ~]# find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;
/etc/nova/nova.conf:notification_driver=
/etc/nova/nova.conf:notification_driver=
/etc/nova/nova.conf:notification_driver=
Steps to reproduce:
1. Stop all ceilometer-related services on controllers and compute nodes.
2. Find all conf files with the notification_driver variable (find /etc/ -name \*.conf -exec grep -HiR '^notification_driver' {} \;)
3. Set the notification_driver variable to blank or noop.
4. Restart all corresponding APIs, neutron-server, and nova-compute.
5. Purge the notifications.info queue.
6. Create and delete an instance.
7. The notifications.info queue grows again.
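Steps 2–4 above could be sketched as a shell snippet. This is a hedged sketch, not the reporter's exact procedure: the file list is an assumption taken from the grep output in this report, and the restart commands (commented out) vary by deployment.

```shell
#!/bin/sh
# Sketch: force notification_driver to the noop driver in each service config.
# The config paths below are assumptions based on the grep output in this report.
for f in /etc/nova/nova.conf /etc/neutron/neutron.conf /etc/cinder/cinder.conf \
         /etc/glance/glance-api.conf /etc/heat/heat.conf; do
  # Only touch files that exist on this node; rewrite any notification_driver line.
  [ -f "$f" ] && sed -i 's/^notification_driver.*/notification_driver = noop/' "$f"
done
# Then restart the corresponding services (exact names vary per release), e.g.:
# service nova-api restart; service neutron-server restart; service nova-compute restart
```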
Expected results:
Queue should no longer be published to.
Actual result:
Neutron appears to be publishing messages for port create and port delete events (see the payload above).
Reproducibility:
100%
Workaround:
Cron job to purge queue manually.
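The workaround could look like the following crontab fragment. This is a hypothetical sketch: the file path and interval are assumptions, and `rabbitmqctl purge_queue` requires a RabbitMQ release that provides that subcommand (3.x or newer).

```shell
# /etc/cron.d/purge-notifications (hypothetical file): purge the orphaned
# ceilometer notification queue periodically so it cannot grow unbounded.
*/15 * * * * root rabbitmqctl purge_queue notifications.info >/dev/null 2>&1
```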
Impact:
If no cron job is configured, the queue will grow too large and RabbitMQ may experience issues.
Description of the environment:
- Operating system: Ubuntu 14.04
- Versions of components: MOS 7.0 MU4, MOS 5.1 (I assume other MOS versions are affected too) |
|
2016-08-23 16:31:44 |
Javier Diaz Jr |
tags |
|
customer-found |
|
2016-08-23 16:32:43 |
Andrii Petrenko |
tags |
customer-found |
customer-found support |
|
2016-08-23 16:33:07 |
Andrii Petrenko |
nominated for series |
|
mos/7.0.x |
|
2016-08-23 16:33:07 |
Andrii Petrenko |
bug task added |
|
mos/7.0.x |
|
2016-08-23 17:19:00 |
Javier Diaz Jr |
mos: assignee |
|
MOS Maintenance (mos-maintenance) |
|
2016-08-23 17:19:13 |
Javier Diaz Jr |
mos/7.0.x: assignee |
|
MOS Maintenance (mos-maintenance) |
|
2016-08-24 14:43:04 |
Dmitry Mescheryakov |
nominated for series |
|
mos/8.0.x |
|
2016-08-24 14:43:04 |
Dmitry Mescheryakov |
bug task added |
|
mos/8.0.x |
|
2016-08-24 14:43:04 |
Dmitry Mescheryakov |
nominated for series |
|
mos/9.x |
|
2016-08-24 14:43:04 |
Dmitry Mescheryakov |
bug task added |
|
mos/9.x |
|
2016-08-24 14:43:18 |
Dmitry Mescheryakov |
mos/8.0.x: assignee |
|
MOS Maintenance (mos-maintenance) |
|
2016-08-24 14:43:22 |
Dmitry Mescheryakov |
mos/7.0.x: milestone |
|
7.0-updates |
|
2016-08-24 14:43:27 |
Dmitry Mescheryakov |
mos/8.0.x: milestone |
|
8.0-updates |
|
2016-08-24 14:44:23 |
Dmitry Mescheryakov |
mos/9.x: milestone |
|
9.1 |
|
2016-08-24 14:45:04 |
Dmitry Mescheryakov |
mos/9.x: assignee |
MOS Maintenance (mos-maintenance) |
Kirill Bespalov (k-besplv) |
|
2016-08-24 15:58:25 |
Dmitry Mescheryakov |
mos/9.x: assignee |
Kirill Bespalov (k-besplv) |
MOS Puppet Team (mos-puppet) |
|
2016-08-25 13:50:25 |
Ivan Berezovskiy |
mos/9.x: assignee |
MOS Puppet Team (mos-puppet) |
Ivan Berezovskiy (iberezovskiy) |
|
2016-08-25 13:50:29 |
Ivan Berezovskiy |
mos/9.x: importance |
Undecided |
Medium |
|
2016-08-26 13:03:14 |
Vitaly Sedelnik |
mos/7.0.x: status |
New |
Confirmed |
|
2016-08-26 13:03:16 |
Vitaly Sedelnik |
mos/8.0.x: status |
New |
Confirmed |
|
2016-08-26 13:03:18 |
Vitaly Sedelnik |
mos/9.x: status |
New |
Confirmed |
|
2016-08-26 13:03:21 |
Vitaly Sedelnik |
mos/7.0.x: importance |
Undecided |
Medium |
|
2016-08-26 13:03:23 |
Vitaly Sedelnik |
mos/8.0.x: importance |
Undecided |
Medium |
|
2016-08-26 13:04:16 |
Vitaly Sedelnik |
tags |
customer-found support |
area-library customer-found support |
|
2016-09-08 14:17:50 |
Rene Soto |
bug |
|
|
added subscriber Rene Soto |
2016-09-08 17:37:46 |
Ivan Berezovskiy |
mos/9.x: assignee |
Ivan Berezovskiy (iberezovskiy) |
MOS Maintenance (mos-maintenance) |
|
2016-09-15 09:04:45 |
Alexey Stupnikov |
mos/9.x: status |
Confirmed |
Fix Committed |
|
2016-09-15 09:05:15 |
Alexey Stupnikov |
mos/9.x: assignee |
MOS Maintenance (mos-maintenance) |
Ivan Berezovskiy (iberezovskiy) |
|
2016-09-15 09:05:18 |
Alexey Stupnikov |
mos/8.0.x: assignee |
MOS Maintenance (mos-maintenance) |
Alexey Stupnikov (astupnikov) |
|
2016-09-15 09:05:20 |
Alexey Stupnikov |
mos/7.0.x: assignee |
MOS Maintenance (mos-maintenance) |
Alexey Stupnikov (astupnikov) |
|
2016-09-23 07:36:06 |
TatyanaGladysheva |
tags |
area-library customer-found support |
area-library customer-found on-verification support |
|
2016-09-27 05:13:22 |
TatyanaGladysheva |
tags |
area-library customer-found on-verification support |
area-library customer-found support |
|
2016-09-30 14:11:15 |
Nastya Urlapova |
mos/9.x: status |
Fix Committed |
Fix Released |
|
2016-11-16 12:44:33 |
Alexey Stupnikov |
mos/7.0.x: status |
Confirmed |
Invalid |
|
2016-11-16 12:44:36 |
Alexey Stupnikov |
mos/8.0.x: status |
Confirmed |
Invalid |
|