Activity log for bug #1464527

Date Who What changed Old value New value Message
2015-06-12 06:54:59 shihanzhang bug added bug
2015-06-12 06:55:19 shihanzhang description In an OpenStack deployment with multiple neutron-servers behind HAProxy, a VM can't communicate with others in DVR in the use case below. Reproduce steps:
1. Create a subnet and add this subnet to a DVR router.
2. Bulk-create two VMs in this subnet on a specific compute node (this node does not yet host any VMs in this subnet).
3. When one VM's port status is ACTIVE but the other VM's port status is still BUILD, delete the VM whose port status is ACTIVE.
After this I can't find the namespace of this DVR router. The reason is that during 'delete_port' it checks the status of all ports on this host and subnet using 'check_ports_active_on_host_and_subnet', but it only counts ports in ACTIVE status, and sometimes a VM's port status is still BUILD:

    def check_ports_active_on_host_and_subnet(self, context, host,
                                              port_id, subnet_id):
        """Check if there is any dvr serviceable port on the subnet_id."""
        filter_sub = {'fixed_ips': {'subnet_id': [subnet_id]}}
        ports = self._core_plugin.get_ports(context, filters=filter_sub)
        for port in ports:
            if (n_utils.is_dvr_serviced(port['device_owner'])
                and port['status'] == 'ACTIVE'
                and port['binding:host_id'] == host
                and port['id'] != port_id):
                LOG.debug('DVR: Active port exists for subnet %(subnet_id)s '
                          'on host %(host)s', {'subnet_id': subnet_id,
                                               'host': host})
                return True
        return False
2015-06-12 06:56:12 shihanzhang description (re-edit of the description above; text identical apart from whitespace in the code sample)
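The fix direction recorded in the summary change below (don't delete the DVR namespace while ports are still not ACTIVE) can be sketched as follows. This is a minimal, self-contained illustration, not the actual Neutron patch: the function name `dvr_port_exists_on_host_and_subnet`, the `VALID_STATES` tuple, and the `device_owner` prefix check standing in for `n_utils.is_dvr_serviced` are assumptions for the sketch, and plain dicts replace the plugin's port objects.

```python
# Sketch of the fix described in this bug: a port still in BUILD should also
# keep the DVR router namespace alive, so the check accepts BUILD as well as
# ACTIVE instead of ACTIVE only.

VALID_STATES = ('ACTIVE', 'BUILD')  # BUILD ports also count as present


def dvr_port_exists_on_host_and_subnet(ports, host, port_id, subnet_id):
    """Return True if any other DVR-serviceable port (ACTIVE or BUILD)
    remains on this host and subnet, i.e. the namespace must be kept."""
    for port in ports:
        # Stand-in for n_utils.is_dvr_serviced(port['device_owner']):
        # in this sketch, treat compute-owned ports as DVR-serviceable.
        if (port['device_owner'].startswith('compute:')
                and port['status'] in VALID_STATES
                and port['binding:host_id'] == host
                and port['id'] != port_id
                and subnet_id in [ip['subnet_id']
                                  for ip in port['fixed_ips']]):
            return True
    return False
```

With this check, the scenario in the description no longer removes the namespace: the surviving VM's BUILD port on the same host and subnet is counted, so deleting the ACTIVE port returns True from the check.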
2015-06-12 06:56:20 shihanzhang neutron: assignee shihanzhang (shihanzhang)
2015-06-12 13:41:11 Assaf Muller tags l3-dvr-backlog
2015-06-15 07:22:18 OpenStack Infra neutron: status New In Progress
2015-06-23 19:34:02 Swaminathan Vasudevan summary VM can't communicate with others in DVR Don't delete DVR Namespace when there are still ports not in ACTIVE state.
2015-06-24 06:37:06 OpenStack Infra neutron: status In Progress Fix Committed
2015-06-24 20:20:40 Thierry Carrez neutron: status Fix Committed Fix Released
2015-06-24 20:20:40 Thierry Carrez neutron: milestone liberty-1
2015-06-26 17:34:45 OpenStack Infra tags l3-dvr-backlog in-feature-qos l3-dvr-backlog
2015-06-26 17:34:46 OpenStack Infra bug watch added http://bugs.python.org/issue21239
2015-06-30 02:34:14 OpenStack Infra tags in-feature-qos l3-dvr-backlog in-feature-pecan in-feature-qos l3-dvr-backlog
2015-10-15 12:19:53 Thierry Carrez neutron: milestone liberty-1 7.0.0