Removing gateway ip for tenant network (DVR) causes traceback in neutron-openvswitch-agent

Bug #1728665 reported by James Denton
This bug affects 7 people
Affects: neutron
Status: Fix Released
Importance: High
Assigned to: Brian Haley

Bug Description

Version: OpenStack Newton (OSA v14.2.11)
neutron-openvswitch-agent version 9.4.2.dev21

Issue:

Users complained that instances were unable to obtain an IP address via DHCP. On the controllers, numerous ports were found in the BUILD state. Tracebacks similar to the following could be observed in the neutron-openvswitch-agent logs across the three controllers.

2017-10-26 16:24:28.458 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port e9c11103-9d10-4b27-b739-e428773d8fac updated. Details: {u'profile': {}, u'network_qos_policy_id': None, u'qos_policy_id': None, u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': u'e57257d9-f915-4c60-ac30-76b0e2d36378', u'segmentation_id': 2123, u'device_owner': u'network:dhcp', u'physical_network': u'physnet1', u'mac_address': u'fa:16:3e:af:aa:f5', u'device': u'e9c11103-9d10-4b27-b739-e428773d8fac', u'port_security_enabled': False, u'port_id': u'e9c11103-9d10-4b27-b739-e428773d8fac', u'fixed_ips': [{u'subnet_id': u'b7196c99-0df6-4b0e-bbfa-e62da96dac86', u'ip_address': u'10.1.1.32'}], u'network_type': u'vlan'}
2017-10-26 16:24:28.458 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 48 as local vlan for net-id=e57257d9-f915-4c60-ac30-76b0e2d36378
2017-10-26 16:24:28.462 4403 INFO neutron.agent.l2.extensions.qos [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] QoS extension did have no information about the port e9c11103-9d10-4b27-b739-e428773d8fac that we were trying to reset
2017-10-26 16:24:28.462 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 610c3924-5e94-4f95-b19b-75e43c5729ff updated. Details: {u'profile': {}, u'network_qos_policy_id': None, u'qos_policy_id': None, u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': u'f09a8be9-a7c7-4f90-8cb3-d08b61095c25', u'segmentation_id': 5, u'device_owner': u'network:router_gateway', u'physical_network': u'physnet1', u'mac_address': u'fa:16:3e:bf:39:43', u'device': u'610c3924-5e94-4f95-b19b-75e43c5729ff', u'port_security_enabled': False, u'port_id': u'610c3924-5e94-4f95-b19b-75e43c5729ff', u'fixed_ips': [{u'subnet_id': u'3ce21ed4-bb6a-4e67-b222-a055df40af08', u'ip_address': u'96.116.48.132'}], u'network_type': u'vlan'}
2017-10-26 16:24:28.463 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 43 as local vlan for net-id=f09a8be9-a7c7-4f90-8cb3-d08b61095c25
2017-10-26 16:24:28.466 4403 INFO neutron.agent.l2.extensions.qos [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] QoS extension did have no information about the port 610c3924-5e94-4f95-b19b-75e43c5729ff that we were trying to reset
2017-10-26 16:24:28.467 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 66db7e2d-bd92-48ea-85fa-5e20dfc5311c updated. Details: {u'profile': {}, u'network_qos_policy_id': None, u'qos_policy_id': None, u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': u'fd67eae2-9db7-4f7c-a622-39be67090cb4', u'segmentation_id': 2170, u'device_owner': u'network:dhcp', u'physical_network': u'physnet1', u'mac_address': u'fa:16:3e:c9:24:8a', u'device': u'66db7e2d-bd92-48ea-85fa-5e20dfc5311c', u'port_security_enabled': False, u'port_id': u'66db7e2d-bd92-48ea-85fa-5e20dfc5311c', u'fixed_ips': [{u'subnet_id': u'47366a54-22ca-47a2-b7a0-987257fa83ea', u'ip_address': u'192.168.189.3'}], u'network_type': u'vlan'}
2017-10-26 16:24:28.467 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 54 as local vlan for net-id=fd67eae2-9db7-4f7c-a622-39be67090cb4
2017-10-26 16:24:28.470 4403 INFO neutron.agent.l2.extensions.qos [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] QoS extension did have no information about the port 66db7e2d-bd92-48ea-85fa-5e20dfc5311c that we were trying to reset
{...snip...}
2017-10-26 16:24:28.501 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port c53c48d4-77a8-4185-bc87-ff999bdfd4a1 updated. Details: {u'profile': {}, u'network_qos_policy_id': None, u'qos_policy_id': None, u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': u'06390e9c-6aa4-427a-91dc-5cf2c62be143', u'segmentation_id': 2003, u'device_owner': u'network:router_interface_distributed', u'physical_network': u'physnet1', u'mac_address': u'fa:16:3e:38:8b:f0', u'device': u'c53c48d4-77a8-4185-bc87-ff999bdfd4a1', u'port_security_enabled': False, u'port_id': u'c53c48d4-77a8-4185-bc87-ff999bdfd4a1', u'fixed_ips': [{u'subnet_id': u'6d20ab59-a8a8-4663-b052-d78fea133c23', u'ip_address': u'192.168.100.1'}], u'network_type': u'vlan'}
2017-10-26 16:24:28.501 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 9 as local vlan for net-id=06390e9c-6aa4-427a-91dc-5cf2c62be143
2017-10-26 16:24:28.656 4403 INFO neutron.agent.l2.extensions.qos [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] QoS extension did have no information about the port c53c48d4-77a8-4185-bc87-ff999bdfd4a1 that we were trying to reset
2017-10-26 16:24:28.656 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 68e5740d-81e1-4355-bb48-867305f706bc updated. Details: {u'profile': {}, u'network_qos_policy_id': None, u'qos_policy_id': None, u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': u'06b4c4da-9de0-447b-8892-ccc45f3393ba', u'segmentation_id': 2122, u'device_owner': u'network:router_interface_distributed', u'physical_network': u'physnet1', u'mac_address': u'fa:16:3e:2e:33:cc', u'device': u'68e5740d-81e1-4355-bb48-867305f706bc', u'port_security_enabled': False, u'port_id': u'68e5740d-81e1-4355-bb48-867305f706bc', u'fixed_ips': [{u'subnet_id': u'099a6d6e-1353-4b7a-aab3-ede9d592328d', u'ip_address': u'10.10.1.10'}], u'network_type': u'vlan'}
2017-10-26 16:24:28.657 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 12 as local vlan for net-id=06b4c4da-9de0-447b-8892-ccc45f3393ba
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager [-] ofctl_service: Exception occurred during handler processing. Backtrace from offending handler [_handle_send_msg] servicing event [SendMsgRequest] follows.
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager Traceback (most recent call last):
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/ryu/base/app_manager.py", line 290, in _event_loop
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager handler(ev)
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/ryu/app/ofctl/service.py", line 141, in _handle_send_msg
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager datapath.send_msg(msg)
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/ryu/controller/controller.py", line 334, in send_msg
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager msg.serialize()
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/ryu/ofproto/ofproto_parser.py", line 211, in serialize
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager self._serialize_body()
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/ryu/ofproto/ofproto_v1_3_parser.py", line 2654, in _serialize_body
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager match_len = self.match.serialize(self.buf, offset)
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/ryu/ofproto/ofproto_v1_3_parser.py", line 1008, in serialize
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager field_offset)
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/ryu/ofproto/oxx_fields.py", line 250, in _serialize
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager value_len = len(value)
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager TypeError: object of type 'NoneType' has no len()
2017-10-26 16:24:28.790 4403 ERROR ryu.base.app_manager
2017-10-26 16:24:37.926 4403 INFO neutron.agent.securitygroups_rpc [req-d8721ca3-32dd-401e-bbf9-51b48a068dd3 1388ac7207b9a81e1f106c2ddaae60b0a027ba25e4cc5a2e1c293962319fad62 fb84a528d1464523b9faba201a60bb1d - - -] Security group member updated [u'84f95888-61f2-4299-b7de-1b5bb8854057', u'b069c4af-aa82-410d-8105-5418dbf3fbfe', u'b0e16b93-6276-40a6-ac36-77350e89d349', u'd2a782cb-3dde-4fa4-b6ca-e76023741a46']
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] ofctl request version=0x4,msg_type=0xe,msg_len=None,xid=0x42a2e0fb,OFPFlowMod(buffer_id=4294967295,command=0,cookie=10758795138239065597L,cookie_mask=0,flags=0,hard_timeout=0,idle_timeout=0,instructions=[],match=OFPMatch(oxm_fields={'arp_tpa': None, 'eth_type': 2054, 'vlan_vid': 4108}),out_group=0,out_port=0,priority=3,table_id=1) timed out
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Error while processing VIF ports
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2053, in rpc_loop
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, ovs_restarted)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/osprofiler/profiler.py", line 154, in wrapper
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return f(*args, **kwargs)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1636, in process_network_ports
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent devices_added_updated, ovs_restarted))
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/osprofiler/profiler.py", line 154, in wrapper
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return f(*args, **kwargs)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1538, in treat_devices_added_or_updated
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ovs_restarted)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/osprofiler/profiler.py", line 154, in wrapper
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return f(*args, **kwargs)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1423, in treat_vif_port
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent fixed_ips, device_owner, ovs_restarted)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/osprofiler/profiler.py", line 154, in wrapper
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return f(*args, **kwargs)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 799, in port_bound
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent device_owner)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/osprofiler/profiler.py", line 154, in wrapper
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return f(*args, **kwargs)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py", line 572, in bind_port_to_dvr
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent device_owner)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py", line 430, in _bind_distributed_router_interface_port
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent vlan_tag=lvm.vlan, gateway_ip=subnet_info['gateway_ip'])
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_dvr_process.py", line 42, in install_dvr_process_ipv4
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent match=match)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 175, in install_drop
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent instructions=[], match=match, **match_kwargs)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 188, in install_instructions
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent self._send_msg(msg)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-14.2.4/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 89, in _send_msg
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent raise RuntimeError(m)
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent RuntimeError: ofctl request version=0x4,msg_type=0xe,msg_len=None,xid=0x42a2e0fb,OFPFlowMod(buffer_id=4294967295,command=0,cookie=10758795138239065597L,cookie_mask=0,flags=0,hard_timeout=0,idle_timeout=0,instructions=[],match=OFPMatch(oxm_fields={'arp_tpa': None, 'eth_type': 2054, 'vlan_vid': 4108}),out_group=0,out_port=0,priority=3,table_id=1) timed out
2017-10-26 16:24:38.791 4403 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
2017-10-26 16:24:38.851 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Agent out of sync with plugin!
2017-10-26 16:24:38.917 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 'tap3f675555-ac' has lost its vlan tag '75'!
2017-10-26 16:24:38.917 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 'tap6616ea67-af' has lost its vlan tag '73'!
2017-10-26 16:24:38.918 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 'qg-610c3924-5e' has lost its vlan tag '43'!
2017-10-26 16:24:38.918 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 'qr-68e5740d-81' has lost its vlan tag '12'!
2017-10-26 16:24:38.918 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 'qr-c53c48d4-77' has lost its vlan tag '9'!
2017-10-26 16:24:38.919 4403 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 'sg-caac51f8-11' has lost its vlan tag '47'!

It turns out that a user removed the gateway IP from their subnet with the following command:

openstack subnet set --gateway none <subnet>

The subnet is attached to a distributed router. The neutron-openvswitch-agent service was later restarted across the controllers and immediately began logging tracebacks similar to the output provided. The tracebacks occurred on every agent loop, causing many ports on these hosts to never be fully processed and to remain in the BUILD state. This had a negative impact on other tenants, whose virtual routers or DHCP servers were unavailable.

In the lab, setting the gateway IP with 'openstack subnet set --gateway <ip> <subnet>' and restarting the neutron-openvswitch-agent(s) appears to restore functionality.
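
For operators hitting this, one way to spot affected subnets before restarting agents is to look for distributed-router interface ports whose subnet has no gateway_ip. The following is a minimal sketch, not part of the original report, using openstacksdk; the helper name and the clouds.yaml/OS_CLOUD-based authentication are assumptions:

import openstack

def find_gatewayless_dvr_subnets(cloud_name=None):
    # Connect using credentials from clouds.yaml / OS_* environment variables.
    conn = openstack.connect(cloud=cloud_name)
    affected = set()
    # Distributed router interface ports are the ones the OVS agent will try
    # to build an ARP flow for, using the subnet's gateway_ip.
    for port in conn.network.ports(
            device_owner='network:router_interface_distributed'):
        for fixed_ip in port.fixed_ips:
            subnet = conn.network.get_subnet(fixed_ip['subnet_id'])
            if subnet.gateway_ip is None:
                affected.add(subnet.id)
    return affected

if __name__ == '__main__':
    for subnet_id in sorted(find_gatewayless_dvr_subnets()):
        print(subnet_id)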

Revision history for this message
James Denton (james-denton) wrote :

Additional info:

In this output, port 68e5740d-81e1-4355-bb48-867305f706bc is the port formerly associated with the gateway_ip of the respective tenant subnet; its details are:

root@controller01-utility-container-8ad9622f:~# neutron port-show 68e5740d-81e1-4355-bb48-867305f706bc
+-----------------------+-----------------------------------------------------------------------------------+
| Field                 | Value                                                                             |
+-----------------------+-----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                              |
| allowed_address_pairs |                                                                                   |
| binding:host_id       |                                                                                   |
| binding:profile       | {}                                                                                |
| binding:vif_details   | {}                                                                                |
| binding:vif_type      | distributed                                                                       |
| binding:vnic_type     | normal                                                                            |
| created_at            | 2017-09-25T19:28:18Z                                                              |
| description           |                                                                                   |
| device_id             | eee05405-f4c7-4a7a-9100-fc09bbfc9d82                                              |
| device_owner          | network:router_interface_distributed                                              |
| extra_dhcp_opts       |                                                                                   |
| fixed_ips             | {"subnet_id": "099a6d6e-1353-4b7a-aab3-ede9d592328d", "ip_address": "10.10.1.10"} |
| id                    | 68e5740d-81e1-4355-bb48-867305f706bc                                              |
| mac_address           | fa:16:3e:2e:33:cc                                                                 |
| name                  |                                                                                   |
| network_id            | 06b4c4da-9de0-447b-8892-ccc45f3393ba                                              |
| port_security_enabled | False                                                                             |
| project_id            | 340c72665c664f52bba3ac61a81fd501                                                  |
| qos_policy_id         |                                                                                   |
| revision_number       | 207200                                                                            |
| security_groups       |                                                                                   |
| status                | ACTIVE                                                                            |
| tenant_id             | 340c72665c664f52bba3ac...


Revision history for this message
James Denton (james-denton) wrote :

I was able to duplicate this in neutron version 11.0.0.0rc2.dev368 (OpenStack-Ansible master) under the same conditions. The following log snippet is from neutron-openvswitch-agent.log:

2017-10-31 14:20:24.288 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [req-9f88e2a1-069e-4321-865f-e95c97575c45 - - - - -] ofctl request version=0x4,msg_type=0xe,msg_len=None,xid=0xfda6e61,OFPFlowMod(buffer_id=4294967295,command=0,cookie=1900617720212172640L,cookie_mask=0,flags=0,hard_timeout=0,idle_timeout=0,instructions=[],match=OFPMatch(oxm_fields={'arp_tpa': None, 'eth_type': 2054, 'vlan_vid': 4098}),out_group=0,out_port=0,priority=3,table_id=1) timed out: Timeout: 10 seconds
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-9f88e2a1-069e-4321-865f-e95c97575c45 - - - - -] Error while processing VIF ports: RuntimeError: ofctl request version=0x4,msg_type=0xe,msg_len=None,xid=0xfda6e61,OFPFlowMod(buffer_id=4294967295,command=0,cookie=1900617720212172640L,cookie_mask=0,flags=0,hard_timeout=0,idle_timeout=0,instructions=[],match=OFPMatch(oxm_fields={'arp_tpa': None, 'eth_type': 2054, 'vlan_vid': 4098}),out_group=0,out_port=0,priority=3,table_id=1) timed out
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2063, in rpc_loop
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, ovs_restarted)
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-master/lib/python2.7/site-packages/osprofiler/profiler.py", line 157, in wrapper
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = f(*args, **kwargs)
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1646, in process_network_ports
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent devices_added_updated, ovs_restarted))
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-master/lib/python2.7/site-packages/osprofiler/profiler.py", line 157, in wrapper
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = f(*args, **kwargs)
2017-10-31 14:20:24.290 10631 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1548, in treat_devices_added_or_updated
2017-10-31 14:20:24.290 10631 ERR...

Revision history for this message
Brian Haley (brian-haley) wrote :

The last comment shows a timeout, as well as a ryu traceback. Is there a problem with this agent communicating with OVS?

tags: added: ovs
Revision history for this message
James Denton (james-denton) wrote :

No communication problem that I can see. I can consistently create this condition by setting gateway_ip=None on a tenant subnet attached to a distributed router and restarting the OVS agent. Setting a valid gateway IP and restarting the agent restores service.

tags: added: l3-dvr-backlog
Changed in neutron:
status: New → Confirmed
importance: Undecided → Medium
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/521199

Changed in neutron:
assignee: nobody → Brian Haley (brian-haley)
status: Confirmed → In Progress
Revision history for this message
Brian Haley (brian-haley) wrote :

Changed importance to High since the agent isn't usable once it gets into this state.

Changed in neutron:
importance: Medium → High
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (master)

Reviewed: https://review.openstack.org/521199
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=9be7b62f773d3f61da57c151bfbd5c8fe4d4e863
Submitter: Zuul
Branch: master

commit 9be7b62f773d3f61da57c151bfbd5c8fe4d4e863
Author: Brian Haley <email address hidden>
Date: Fri Nov 17 16:53:41 2017 -0500

    DVR: verify subnet has gateway_ip before installing IPv4 flow

    If a user clears the gateway_ip of a subnet and the OVS
    agent is re-started, it will throw an exception trying
    to install the DVR IPv4 flow. Do not install the flow
    in this case since it is not required.

    Change-Id: I79aba63498aa9af1156e37530627fcaec853a740
    Closes-bug: #1728665
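
In effect, the change guards the IPv4 flow installation on the presence of a gateway_ip. Below is a minimal illustrative sketch of that guard, not the literal upstream diff (see review 521199); the helper name and the bridge argument are stand-ins, while install_dvr_process_ipv4 and its keyword arguments come from the traceback above:

def maybe_install_dvr_ipv4_flow(int_br, vlan_tag, subnet_info):
    gateway_ip = subnet_info.get('gateway_ip')
    if not gateway_ip:
        # 'openstack subnet set --gateway none' leaves gateway_ip as None.
        # Matching on arp_tpa=None is what made ryu's serializer raise
        # "TypeError: object of type 'NoneType' has no len()", so skip the
        # flow entirely; it is only needed when the subnet has a gateway.
        return
    int_br.install_dvr_process_ipv4(vlan_tag=vlan_tag, gateway_ip=gateway_ip)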

Changed in neutron:
status: In Progress → Fix Released
Revision history for this message
Arjun Baindur (abaindur) wrote :

Is this going to be backported to Pike? It affects releases all the way back to Newton. How was it not discovered until just now? It seems like a fairly common use case: whenever we have multi-NIC VMs attached to more than one network, the gateway is usually disabled on those secondary networks.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.openstack.org/548605

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/pike)

Fix proposed to branch: stable/pike
Review: https://review.openstack.org/548606

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/ocata)

Fix proposed to branch: stable/ocata
Review: https://review.openstack.org/548607

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/queens)

Reviewed: https://review.openstack.org/548605
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=4593069cd1c3bfd157e7bf7d1fde342a04a5104b
Submitter: Zuul
Branch: stable/queens

commit 4593069cd1c3bfd157e7bf7d1fde342a04a5104b
Author: Brian Haley <email address hidden>
Date: Fri Nov 17 16:53:41 2017 -0500

    DVR: verify subnet has gateway_ip before installing IPv4 flow

    If a user clears the gateway_ip of a subnet and the OVS
    agent is re-started, it will throw an exception trying
    to install the DVR IPv4 flow. Do not install the flow
    in this case since it is not required.

    Change-Id: I79aba63498aa9af1156e37530627fcaec853a740
    Closes-bug: #1728665
    (cherry picked from commit 9be7b62f773d3f61da57c151bfbd5c8fe4d4e863)

tags: added: in-stable-queens
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/pike)

Reviewed: https://review.openstack.org/548606
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=8e0c0b8fa7e01c01d90c096b3b36c6a38b55752e
Submitter: Zuul
Branch: stable/pike

commit 8e0c0b8fa7e01c01d90c096b3b36c6a38b55752e
Author: Brian Haley <email address hidden>
Date: Fri Nov 17 16:53:41 2017 -0500

    DVR: verify subnet has gateway_ip before installing IPv4 flow

    If a user clears the gateway_ip of a subnet and the OVS
    agent is re-started, it will throw an exception trying
    to install the DVR IPv4 flow. Do not install the flow
    in this case since it is not required.

    Change-Id: I79aba63498aa9af1156e37530627fcaec853a740
    Closes-bug: #1728665
    (cherry picked from commit 9be7b62f773d3f61da57c151bfbd5c8fe4d4e863)

tags: added: in-stable-pike
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/ocata)

Reviewed: https://review.openstack.org/548607
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=f5e7420da5af69293276dc31fe28ff843a74a0b0
Submitter: Zuul
Branch: stable/ocata

commit f5e7420da5af69293276dc31fe28ff843a74a0b0
Author: Brian Haley <email address hidden>
Date: Fri Nov 17 16:53:41 2017 -0500

    DVR: verify subnet has gateway_ip before installing IPv4 flow

    If a user clears the gateway_ip of a subnet and the OVS
    agent is re-started, it will throw an exception trying
    to install the DVR IPv4 flow. Do not install the flow
    in this case since it is not required.

    Change-Id: I79aba63498aa9af1156e37530627fcaec853a740
    Closes-bug: #1728665
    (cherry picked from commit 9be7b62f773d3f61da57c151bfbd5c8fe4d4e863)

tags: added: in-stable-ocata
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 10.0.5

This issue was fixed in the openstack/neutron 10.0.5 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 11.0.3

This issue was fixed in the openstack/neutron 11.0.3 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 12.0.1

This issue was fixed in the openstack/neutron 12.0.1 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 13.0.0.0b1

This issue was fixed in the openstack/neutron 13.0.0.0b1 development milestone.
