Activity log for bug #1825034

Date Who What changed Old value New value Message
2019-04-16 19:17:03 Matt Riedemann bug added bug
2019-04-16 19:18:03 Matt Riedemann nova: status New Confirmed
2019-04-16 19:18:05 Matt Riedemann nova: importance Undecided High
2019-04-16 19:18:20 Matt Riedemann tags upgrade
2019-04-16 19:24:32 s10 bug added subscriber s10
2019-04-16 19:48:56 Matt Riedemann description

Old value:

I found this bug while trying to recreate bug 1825018 with a functional test. The fill_virtual_interface_list online data migration creates a fake, mostly empty instance record to satisfy a foreign key constraint in the virtual_interfaces table, which is used as a marker when paging across cells to fulfill the migration. The problem is that if you list deleted servers (as admin) with the all_tenants=1 and deleted=1 filters, the API will fail with a 500 error trying to load the instance.flavor field:

2019-04-16 15:08:53,720 ERROR [nova.api.openstack.wsgi] Unexpected exception in API method
Traceback (most recent call last):
  File "/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/urllib3/connectionpool.py", line 377, in _make_request
    httplib_response = conn.getresponse(buffering=True)
TypeError: getresponse() got an unexpected keyword argument 'buffering'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/osboxes/git/nova/nova/api/openstack/wsgi.py", line 671, in wrapped
    return f(*args, **kwargs)
  File "/home/osboxes/git/nova/nova/api/validation/__init__.py", line 192, in wrapper
    return func(*args, **kwargs)
  File "/home/osboxes/git/nova/nova/api/validation/__init__.py", line 192, in wrapper
    return func(*args, **kwargs)
  File "/home/osboxes/git/nova/nova/api/validation/__init__.py", line 192, in wrapper
    return func(*args, **kwargs)
  File "/home/osboxes/git/nova/nova/api/openstack/compute/servers.py", line 136, in detail
    servers = self._get_servers(req, is_detail=True)
  File "/home/osboxes/git/nova/nova/api/openstack/compute/servers.py", line 330, in _get_servers
    req, instance_list, cell_down_support=cell_down_support)
  File "/home/osboxes/git/nova/nova/api/openstack/compute/views/servers.py", line 390, in detail
    cell_down_support=cell_down_support)
  File "/home/osboxes/git/nova/nova/api/openstack/compute/views/servers.py", line 425, in _list_view
    for server in servers]
  File "/home/osboxes/git/nova/nova/api/openstack/compute/views/servers.py", line 425, in <listcomp>
    for server in servers]
  File "/home/osboxes/git/nova/nova/api/openstack/compute/views/servers.py", line 222, in show
    show_extra_specs),
  File "/home/osboxes/git/nova/nova/api/openstack/compute/views/servers.py", line 494, in _get_flavor
    instance_type = instance.get_flavor()
  File "/home/osboxes/git/nova/nova/objects/instance.py", line 1191, in get_flavor
    return getattr(self, attr)
  File "/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_versionedobjects/base.py", line 67, in getter
    self.obj_load_attr(name)
  File "/home/osboxes/git/nova/nova/objects/instance.py", line 1114, in obj_load_attr
    self._obj_load_attr(attrname)
  File "/home/osboxes/git/nova/nova/objects/instance.py", line 1158, in _obj_load_attr
    self._load_flavor()
  File "/home/osboxes/git/nova/nova/objects/instance.py", line 967, in _load_flavor
    self.flavor = instance.flavor
  File "/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_versionedobjects/base.py", line 67, in getter
    self.obj_load_attr(name)
  File "/home/osboxes/git/nova/nova/objects/instance.py", line 1101, in obj_load_attr
    objtype=self.obj_name())
nova.exception.OrphanedObjectError: Cannot call obj_load_attr on orphaned Instance object
2019-04-16 15:08:53,722 INFO [nova.api.openstack.wsgi] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
<class 'nova.exception.OrphanedObjectError'>
2019-04-16 15:08:53,723 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/6f70656e737461636b20342065766572/servers/detail?all_tenants=1&deleted=1" status: 500 len: 208 microversion: 2.1 time: 0.138964

New value: identical, except that the opening sentence references bug 1824435 instead of bug 1825018.
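The traceback above boils down to one thing: the fake marker instance created by the migration has no context behind it, so any lazy-load (here, the flavor) raises OrphanedObjectError, which surfaces as a 500. A minimal, self-contained sketch of that failure mode and one possible guard - the class and helper names here are invented for illustration and are not nova's actual fix:

```python
# Hypothetical sketch, not nova code: OrphanedObjectError, FakeMarkerInstance
# and safe_flavor_view are stand-ins invented for illustration.

class OrphanedObjectError(Exception):
    """Stand-in for nova.exception.OrphanedObjectError."""


class FakeMarkerInstance:
    """Stand-in for the mostly empty marker record created by the
    fill_virtual_interface_list migration: lazy-loading any attribute
    (such as flavor) blows up because the record has no context."""

    def get_flavor(self):
        raise OrphanedObjectError(
            "Cannot call obj_load_attr on orphaned Instance object")


def safe_flavor_view(instance):
    """Build the flavor portion of a server view, degrading to an empty
    dict instead of a 500 when the record cannot load its flavor."""
    try:
        flavor = instance.get_flavor()
    except OrphanedObjectError:
        return {}
    return {"id": flavor["flavorid"]}


# The marker row no longer takes down the whole deleted-servers listing:
print(safe_flavor_view(FakeMarkerInstance()))  # -> {}
```

A real fix would have to decide this at the DB/API layer (e.g. by excluding the marker record from listings) rather than in the view builder; the sketch only isolates the mechanism of the crash.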
2019-04-16 20:43:38 Matt Riedemann nominated for series nova/stein
2019-04-16 20:43:38 Matt Riedemann bug task added nova/stein
2019-04-16 20:43:44 Matt Riedemann nova/stein: status New Confirmed
2019-04-16 20:43:47 Matt Riedemann nova/stein: importance Undecided High
2019-04-16 21:50:17 melanie witt bug added subscriber melanie witt
2019-04-16 21:51:14 OpenStack Infra nova: status Confirmed In Progress
2019-04-16 21:51:14 OpenStack Infra nova: assignee Matt Riedemann (mriedem)
2019-05-06 18:02:09 OpenStack Infra nova/stein: status Confirmed In Progress
2019-05-06 18:02:09 OpenStack Infra nova/stein: assignee Matt Riedemann (mriedem)
2019-05-06 22:52:59 OpenStack Infra nova: status In Progress Fix Released
2019-05-22 10:09:06 OpenStack Infra tags upgrade in-stable-stein upgrade
2019-05-22 14:53:36 OpenStack Infra nova/stein: status In Progress Fix Committed
2021-06-28 15:58:58 Edward Hope-Morley description

Old value: the description set by Matt Riedemann on 2019-04-16 19:48:56 (reproduced above), unchanged.

New value:

[Impact]

 * During the periodic task _heal_instance_info_cache, the instance_info_caches are not updated using instance port_ids taken from neutron, but from the nova db.
 * This causes existing VMs to lose their network interfaces after a reboot.

[Test Plan]

 * This bug is reproducible on Bionic/Queens clouds.

 1) Deploy the following Juju bundle: https://paste.ubuntu.com/p/HgsqZfsDGh/
 2) Run the following script: https://paste.ubuntu.com/p/c4VDkqyR2z/
 3) If the script finishes with "Port not found", the bug is still present.

[Where problems could occur]

Instances created prior to the OpenStack Newton release that have more than one interface will not have the information in the virtual_interfaces table that is required to repopulate the cache with the interfaces in the same order they were originally attached. In the unlikely event that this occurs and you are using the Queens or Rocky release, it will be necessary to manually populate this table; Stein has a patch that adds support for generating this data. Since, as things stand, the guest would be unable to identify its network information at all if the cache gets purged, and given the hopefully low risk that a VM was created prior to Newton, we hope the potential for this regression is very low.

------------------------------------------------------------------------------

Description
===========
During the periodic task _heal_instance_info_cache, the instance_info_caches are not updated using instance port_ids taken from neutron, but from the nova db.

Sometimes, perhaps because of some race condition, it's possible to lose some ports from instance_info_caches.
The periodic task _heal_instance_info_cache should clean this up (add the missing records), but in fact it's not working this way.

How it looks now?
=================
_heal_instance_info_cache during crontask:
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/compute/manager.py#L6525

is using network_api to get instance_nw_info (instance_info_caches):

        try:
            # Call to network API to get instance info.. this will
            # force an update to the instance's info_cache
            self.network_api.get_instance_nw_info(context, instance)

self.network_api.get_instance_nw_info() is listed below:
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1377

and it uses _build_network_info_model() without the networks and port_ids parameters (because we're not adding any new interface to the instance):
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2356

Next, _gather_port_ids_and_networks() generates the list of instance networks and port_ids:

        networks, port_ids = self._gather_port_ids_and_networks(
            context, instance, networks, port_ids, client)

https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2389-L2390
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1393

As we can see, _gather_port_ids_and_networks() takes the port list from the DB:
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/objects/instance.py#L1173-L1176

And that's it. When we lose a port, it's not possible to add it back with this periodic task. The only way is to clear the device_id field on the neutron port object and re-attach the interface using `nova interface-attach`.

When the interface is missing and there is no port configured on the compute host (for example after a compute reboot), the interface is not added to the instance, and from neutron's point of view the port state is DOWN. When the interface is missing in the cache and we hard-reboot the instance, it's not added as a tap interface in the xml file, i.e. we don't have the network on the host.

Steps to reproduce
==================
1. Spawn devstack
2. Spawn a VM inside devstack with multiple ports (for example also from 2 different networks)
3. Update the DB row, drop one interface from the interfaces list
4. Hard-reboot the instance
5. See that `nova list` shows the instance without one address, but `nova interface-list` shows all addresses
6. See that one port is missing in the instance xml files
7. In theory the _heal_instance_info_cache should fix these things, but it relies on memory, not on a fresh list of instance ports taken from neutron.

Reproduced Example
==================
1. Spawn a VM with 1 private network port:

nova boot --flavor m1.small --image cirros-0.3.5-x86_64-disk --nic net-name=private test-2

2. Attach ports to have 2 private and 2 public interfaces. nova list:

| a64ed18d-9868-4bf0-90d3-d710d278922d | test-2 | ACTIVE | - | Running | public=2001:db8::e, 172.24.4.15, 2001:db8::c, 172.24.4.16; private=fdda:5d77:e18e:0:f816:3eff:fee8:3333, 10.0.0.3, fdda:5d77:e18e:0:f816:3eff:fe53:231c, 10.0.0.5 |

So we see 4 ports:

stack@mjozefcz-devstack-ptg:~$ nova interface-list a64ed18d-9868-4bf0-90d3-d710d278922d
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
| Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
| ACTIVE     | 6c230305-43f8-42ec-9936-61fe67551168 | 96343d33-5dd2-4289-b0cc-e6c664c2ddd9 | 10.0.0.3,fdda:5d77:e18e:0:f816:3eff:fee8:3333 | fa:16:3e:e8:33:33 |
| ACTIVE     | 71e6c6ad-8016-450f-93f2-75e7e014084d | 9e702a96-2744-40a2-a649-33f935d83ad3 | 172.24.4.16,2001:db8::c                       | fa:16:3e:6d:dc:85 |
| ACTIVE     | a74c9ee8-c426-48ef-890f-3988ecbe95ff | 9e702a96-2744-40a2-a649-33f935d83ad3 | 172.24.4.15,2001:db8::e                       | fa:16:3e:cf:0c:e0 |
| ACTIVE     | b89d6863-fb4c-405c-89f9-698bd9773ad6 | 96343d33-5dd2-4289-b0cc-e6c664c2ddd9 | 10.0.0.5,fdda:5d77:e18e:0:f816:3eff:fe53:231c | fa:16:3e:53:23:1c |
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
stack@mjozefcz-devstack-ptg:~$

We can also see 4 tap interfaces in the xml file:

stack@mjozefcz-devstack-ptg:~$ sudo virsh dumpxml instance-00000002 | grep -i tap
      <target dev='tap6c230305-43'/>
      <target dev='tapb89d6863-fb'/>
      <target dev='tapa74c9ee8-c4'/>
      <target dev='tap71e6c6ad-80'/>
stack@mjozefcz-devstack-ptg:~$

3. Now let's 'corrupt' the instance_info_caches for this specific VM.
We also noticed some race-condition that cause the same problem, but we're unable to reproduce it in devel environment. Original one: --- mysql> select * from instance_info_caches where instance_uuid="a64ed18d-9868-4bf0-90d3-d710d278922d"\G; *************************** 1. row *************************** created_at: 2018-02-26 21:25:31 updated_at: 2018-02-26 21:29:17 deleted_at: NULL id: 2 network_info: [{"profile": {}, "ovs_interfaceid": "6c230305-43f8-42ec-9936-61fe67551168", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "fdda:5d77:e18e:0:f816:3eff:fee8:3333"}], "version": 6, "meta": {"ipv6_address_mode": "slaac", "dhcp_server": "fdda:5d77:e18e:0:f816:3eff:fee7:b04"}, "dns": [], "routes": [], "cidr": "fdda:5d77:e18e::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "fdda:5d77:e18e::1"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.0.0.3"}], "version": 4, "meta": {"dhcp_server": "10.0.0.2"}, "dns": [], "routes": [], "cidr": "10.0.0.0/26", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1"}}], "meta": {"injected": false, "tenant_id": "0314943f52014a5b9bc56b73bec475e6", "mtu": 1450}, "id": "96343d33-5dd2-4289-b0cc-e6c664c2ddd9", "label": "private"}, "devname": "tap6c230305-43", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:e8:33:33", "active": true, "type": "ovs", "id": "6c230305-43f8-42ec-9936-61fe67551168", "qbg_params": null}, {"profile": {}, "ovs_interfaceid": "b89d6863-fb4c-405c-89f9-698bd9773ad6", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "fdda:5d77:e18e:0:f816:3eff:fe53:231c"}], "version": 6, "meta": {"ipv6_address_mode": 
"slaac", "dhcp_server": "fdda:5d77:e18e:0:f816:3eff:fee7:b04"}, "dns": [], "routes": [], "cidr": "fdda:5d77:e18e::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "fdda:5d77:e18e::1"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.0.0.5"}], "version": 4, "meta": {"dhcp_server": "10.0.0.2"}, "dns": [], "routes": [], "cidr": "10.0.0.0/26", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1"}}], "meta": {"injected": false, "tenant_id": "0314943f52014a5b9bc56b73bec475e6", "mtu": 1450}, "id": "96343d33-5dd2-4289-b0cc-e6c664c2ddd9", "label": "private"}, "devname": "tapb89d6863-fb", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:53:23:1c", "active": true, "type": "ovs", "id": "b89d6863-fb4c-405c-89f9-698bd9773ad6", "qbg_params": null}, {"profile": {}, "ovs_interfaceid": "a74c9ee8-c426-48ef-890f-3988ecbe95ff", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::e"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.15"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "9c6f74dab29f4c738e82320075fa1f57", "mtu": 1500}, "id": "9e702a96-2744-40a2-a649-33f935d83ad3", "label": "public"}, "devname": "tapa74c9ee8-c4", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:cf:0c:e0", "active": true, 
"type": "ovs", "id": "a74c9ee8-c426-48ef-890f-3988ecbe95ff", "qbg_params": null}, {"profile": {}, "ovs_interfaceid": "71e6c6ad-8016-450f-93f2-75e7e014084d", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::c"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "9c6f74dab29f4c738e82320075fa1f57", "mtu": 1500}, "id": "9e702a96-2744-40a2-a649-33f935d83ad3", "label": "public"}, "devname": "tap71e6c6ad-80", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:6d:dc:85", "active": true, "type": "ovs", "id": "71e6c6ad-8016-450f-93f2-75e7e014084d", "qbg_params": null}] instance_uuid: a64ed18d-9868-4bf0-90d3-d710d278922d deleted: 0 1 row in set (0.00 sec) ---- Modified one (I removed first port from list): tap6c230305-43 ---- mysql> select * from instance_info_caches where instance_uuid="a64ed18d-9868-4bf0-90d3-d710d278922d"\G; *************************** 1. 
row *************************** created_at: 2018-02-26 21:25:31 updated_at: 2018-02-26 21:29:17 deleted_at: NULL id: 2 network_info: [{"profile": {}, "ovs_interfaceid": "b89d6863-fb4c-405c-89f9-698bd9773ad6", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "fdda:5d77:e18e:0:f816:3eff:fe53:231c"}], "version": 6, "meta": {"ipv6_address_mode": "slaac", "dhcp_server": "fdda:5d77:e18e:0:f816:3eff:fee7:b04"}, "dns": [], "routes": [], "cidr": "fdda:5d77:e18e::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "fdda:5d77:e18e::1"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.0.0.5"}], "version": 4, "meta": {"dhcp_server": "10.0.0.2"}, "dns": [], "routes": [], "cidr": "10.0.0.0/26", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1"}}], "meta": {"injected": false, "tenant_id": "0314943f52014a5b9bc56b73bec475e6", "mtu": 1450}, "id": "96343d33-5dd2-4289-b0cc-e6c664c2ddd9", "label": "private"}, "devname": "tapb89d6863-fb", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:53:23:1c", "active": true, "type": "ovs", "id": "b89d6863-fb4c-405c-89f9-698bd9773ad6", "qbg_params": null}, {"profile": {}, "ovs_interfaceid": "a74c9ee8-c426-48ef-890f-3988ecbe95ff", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::e"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.15"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": 
{"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "9c6f74dab29f4c738e82320075fa1f57", "mtu": 1500}, "id": "9e702a96-2744-40a2-a649-33f935d83ad3", "label": "public"}, "devname": "tapa74c9ee8-c4", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:cf:0c:e0", "active": true, "type": "ovs", "id": "a74c9ee8-c426-48ef-890f-3988ecbe95ff", "qbg_params": null}, {"profile": {}, "ovs_interfaceid": "71e6c6ad-8016-450f-93f2-75e7e014084d", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::c"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "9c6f74dab29f4c738e82320075fa1f57", "mtu": 1500}, "id": "9e702a96-2744-40a2-a649-33f935d83ad3", "label": "public"}, "devname": "tap71e6c6ad-80", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:6d:dc:85", "active": true, "type": "ovs", "id": "71e6c6ad-8016-450f-93f2-75e7e014084d", "qbg_params": null}] instance_uuid: a64ed18d-9868-4bf0-90d3-d710d278922d deleted: 0 ---- 4. 
Now let's take a look at `nova list`:

stack@mjozefcz-devstack-ptg:~$ nova list | grep test-2
| a64ed18d-9868-4bf0-90d3-d710d278922d | test-2 | ACTIVE | - | Running | public=2001:db8::e, 172.24.4.15, 2001:db8::c, 172.24.4.16; private=fdda:5d77:e18e:0:f816:3eff:fe53:231c, 10.0.0.5 |
stack@mjozefcz-devstack-ptg:~$

So as you see, we are missing one interface (private). `nova interface-list` shows it (because it calls neutron instead of nova itself):

stack@mjozefcz-devstack-ptg:~$ nova interface-list a64ed18d-9868-4bf0-90d3-d710d278922d
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
| Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
| ACTIVE     | 6c230305-43f8-42ec-9936-61fe67551168 | 96343d33-5dd2-4289-b0cc-e6c664c2ddd9 | 10.0.0.3,fdda:5d77:e18e:0:f816:3eff:fee8:3333 | fa:16:3e:e8:33:33 |
| ACTIVE     | 71e6c6ad-8016-450f-93f2-75e7e014084d | 9e702a96-2744-40a2-a649-33f935d83ad3 | 172.24.4.16,2001:db8::c                       | fa:16:3e:6d:dc:85 |
| ACTIVE     | a74c9ee8-c426-48ef-890f-3988ecbe95ff | 9e702a96-2744-40a2-a649-33f935d83ad3 | 172.24.4.15,2001:db8::e                       | fa:16:3e:cf:0c:e0 |
| ACTIVE     | b89d6863-fb4c-405c-89f9-698bd9773ad6 | 96343d33-5dd2-4289-b0cc-e6c664c2ddd9 | 10.0.0.5,fdda:5d77:e18e:0:f816:3eff:fe53:231c | fa:16:3e:53:23:1c |
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
stack@mjozefcz-devstack-ptg:~$

5. During this time check the logs - yes, _heal_instance_info_cache has been running for a while, but without success - the port is still missing from the instance_info_caches table:

Feb 26 22:12:03 mjozefcz-devstack-ptg nova-compute[27459]: DEBUG oslo_service.periodic_task [None req-ac707da5-3413-412c-b314-ab38db2134bc service nova] Running periodic task ComputeManager._heal_instance_info_cache {{(pid=27459) run_periodic_tasks /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215}}
Feb 26 22:12:03 mjozefcz-devstack-ptg nova-compute[27459]: DEBUG nova.compute.manager [None req-ac707da5-3413-412c-b314-ab38db2134bc service nova] Starting heal instance info cache {{(pid=27459) _heal_instance_info_cache /opt/stack/nova/nova/compute/manager.py:6541}}
Feb 26 22:12:04 mjozefcz-devstack-ptg nova-compute[27459]: DEBUG nova.compute.manager [None req-ac707da5-3413-412c-b314-ab38db2134bc service nova] [instance: a64ed18d-9868-4bf0-90d3-d710d278922d] Updated the network info_cache for instance {{(pid=27459) _heal_instance_info_cache /opt/stack/nova/nova/compute/manager.py:6603}}

6. Ok, so let's pretend that the customer restarts the VM.

stack@mjozefcz-devstack-ptg:~$ nova reboot a64ed18d-9868-4bf0-90d3-d710d278922d --hard
Request to reboot server <Server: test-2> has been accepted.

7. And now check the connected interfaces - WOOPS, there is no `tap6c230305-43` on the list ;(

stack@mjozefcz-devstack-ptg:~$ sudo virsh dumpxml instance-00000002 | grep -i tap
      <target dev='tapb89d6863-fb'/>
      <target dev='tapa74c9ee8-c4'/>
      <target dev='tap71e6c6ad-80'/>

Environment
===========
Nova master branch, devstack
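The core of the report - _heal_instance_info_cache rebuilds the cache from the port ids nova already has, rather than asking neutron which ports are bound to the instance - can be condensed into a self-contained sketch. Plain dicts stand in for the nova DB and the neutron API; the helper names are invented for illustration and are not nova code:

```python
# Dict-based sketch of the two healing strategies; heal_from_db and
# heal_from_neutron are invented names, not functions in nova.

def heal_from_db(cached_port_ids):
    # What the periodic task effectively does today: rebuild the cache
    # from the port ids nova already recorded, so a lost port stays lost.
    return list(cached_port_ids)

def heal_from_neutron(neutron_ports, instance_uuid):
    # What the report argues it should do: ask neutron for the ports
    # bound to this instance (by device_id) and rebuild from that.
    return [p["id"] for p in neutron_ports if p["device_id"] == instance_uuid]

uuid = "a64ed18d-9868-4bf0-90d3-d710d278922d"
neutron_ports = [  # what neutron knows: both ports still bound to the VM
    {"id": "6c230305-43f8-42ec-9936-61fe67551168", "device_id": uuid},
    {"id": "b89d6863-fb4c-405c-89f9-698bd9773ad6", "device_id": uuid},
]
# Simulate the corrupted cache from step 3: one port dropped.
corrupted_cache = ["b89d6863-fb4c-405c-89f9-698bd9773ad6"]

assert heal_from_db(corrupted_cache) == corrupted_cache   # port stays lost
assert heal_from_neutron(neutron_ports, uuid) == [p["id"] for p in neutron_ports]
```

The second strategy is self-healing precisely because neutron's port list (as shown by `nova interface-list` above) is still complete even when nova's cache is not.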
2021-06-28 16:00:27 Edward Hope-Morley description [Impact] * During the periodic task _heal_instance_info_cache, the instance_info_caches are not updated using instance port_ids taken from neutron, but from the nova DB. * This causes existing VMs to lose their network interfaces after a reboot. [Test Plan] * This bug is reproducible on Bionic/Queens clouds. 1) Deploy the following Juju bundle: https://paste.ubuntu.com/p/HgsqZfsDGh/ 2) Run the following script: https://paste.ubuntu.com/p/c4VDkqyR2z/ 3) If the script finishes with "Port not found", the bug is still present. [Where problems could occur] Instances created prior to the OpenStack Newton release that have more than one interface will not have the associated information in the virtual_interfaces table that is required to repopulate the cache with interfaces in the same order they were attached. In the unlikely event that this occurs and you are using the OpenStack Queens or Rocky release, it will be necessary to manually populate this table; OpenStack Stein has a patch that adds support for generating this data. Since, as things stand, the guest would be unable to identify its network information at all if the cache gets purged, and given the hopefully low risk that a VM was created prior to Newton, we expect the potential for this regression to be very low. ------------------------------------------------------------------------------ Description =========== During the periodic task _heal_instance_info_cache, the instance_info_caches are not updated using instance port_ids taken from neutron, but from the nova DB. Sometimes, perhaps because of some race condition, it is possible to lose some ports from instance_info_caches. The periodic task _heal_instance_info_cache should clean this up (add the missing records), but in fact it does not work this way. How does it look now? 
================= _heal_instance_info_cache, run as a periodic task: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/compute/manager.py#L6525 uses network_api to get instance_nw_info (instance_info_caches): try: # Call to network API to get instance info.. this will # force an update to the instance's info_cache self.network_api.get_instance_nw_info(context, instance) self.network_api.get_instance_nw_info() is defined below: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1377 and it uses _build_network_info_model() without the networks and port_ids parameters (because we're not adding any new interface to the instance): https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2356 Next, _gather_port_ids_and_networks() generates the list of instance networks and port_ids: networks, port_ids = self._gather_port_ids_and_networks( context, instance, networks, port_ids, client) https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2389-L2390 https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1393 As we can see, _gather_port_ids_and_networks() takes the port list from the DB: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/objects/instance.py#L1173-L1176 And that's it. When we lose a port, it is not possible to add it back with this periodic task. The only way is to clear the device_id field on the neutron port object and re-attach the interface using `nova interface-attach`. When the interface is missing from the cache and there is no port configured on the compute host (for example after a compute host reboot), the interface is not added to the instance and, from neutron's point of view, the port state is DOWN. 
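The DB-sourced flow described above can be sketched as follows. This is a simplified stand-in, not the real nova code; all names here are illustrative:

```python
# Simplified sketch (illustrative names only, not the real nova code) of why
# the heal task cannot restore a port that has dropped out of the cache: the
# port ids it rebuilds from come from nova's own DB records, not from a
# fresh neutron query.

def gather_port_ids_from_db(cached_network_info):
    # Stand-in for _gather_port_ids_and_networks(): the source of truth is
    # the existing DB/cache rows, so an already-missing port stays missing.
    return [vif["id"] for vif in cached_network_info]

def heal_info_cache(cached_network_info, neutron_ports):
    # Stand-in for the heal path: only ports whose ids survive in the DB
    # list make it into the rebuilt cache.
    known_ids = gather_port_ids_from_db(cached_network_info)
    return [port for port in neutron_ports if port["id"] in known_ids]

# neutron still knows about both ports...
neutron_ports = [{"id": "port-a"}, {"id": "port-b"}]
# ...but "port-a" was lost from the instance_info_caches row.
corrupted_cache = [{"id": "port-b"}]

healed = heal_info_cache(corrupted_cache, neutron_ports)
print([p["id"] for p in healed])  # "port-a" is never restored
```

If the heal path instead rebuilt the cache from the full neutron port list, the dropped port would come back; that is exactly what the report says is missing.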
When the interface is missing from the cache and we hard-reboot the instance, it is not added as a tap interface in the XML file, so we lose the network on the host. Steps to reproduce ================== 1. Spawn devstack 2. Spawn a VM inside devstack with multiple ports (for example, from 2 different networks) 3. Update the DB row, dropping one interface from the interface list 4. Hard-reboot the instance 5. See that nova list shows the instance without one address, but nova interface-list shows all addresses 6. See that one port is missing from the instance XML file 7. In theory _heal_instance_info_cache should fix these things, but it relies on cached data, not on a fresh list of instance ports taken from neutron. Reproduced Example ================== 1. Spawn a VM with 1 private network port: nova boot --flavor m1.small --image cirros-0.3.5-x86_64-disk --nic net-name=private test-2 2. Attach ports to have 2 private and 2 public interfaces. nova list: | a64ed18d-9868-4bf0-90d3-d710d278922d | test-2 | ACTIVE | - | Running | public=2001:db8::e, 172.24.4.15, 2001:db8::c, 172.24.4.16; private=fdda:5d77:e18e:0:f816:3eff:fee8:3333, 10.0.0.3, fdda:5d77:e18e:0:f816:3eff:fe53:231c, 10.0.0.5 | So we see 4 ports: stack@mjozefcz-devstack-ptg:~$ nova interface-list a64ed18d-9868-4bf0-90d3-d710d278922d +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+ | ACTIVE | 6c230305-43f8-42ec-9936-61fe67551168 | 96343d33-5dd2-4289-b0cc-e6c664c2ddd9 | 10.0.0.3,fdda:5d77:e18e:0:f816:3eff:fee8:3333 | fa:16:3e:e8:33:33 | | ACTIVE | 71e6c6ad-8016-450f-93f2-75e7e014084d | 9e702a96-2744-40a2-a649-33f935d83ad3 | 172.24.4.16,2001:db8::c | fa:16:3e:6d:dc:85 | | ACTIVE | a74c9ee8-c426-48ef-890f-3988ecbe95ff 
| 9e702a96-2744-40a2-a649-33f935d83ad3 | 172.24.4.15,2001:db8::e | fa:16:3e:cf:0c:e0 | | ACTIVE | b89d6863-fb4c-405c-89f9-698bd9773ad6 | 96343d33-5dd2-4289-b0cc-e6c664c2ddd9 | 10.0.0.5,fdda:5d77:e18e:0:f816:3eff:fe53:231c | fa:16:3e:53:23:1c | +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+ stack@mjozefcz-devstack-ptg:~$ We can also see 4 tap interfaces in the XML file: stack@mjozefcz-devstack-ptg:~$ sudo virsh dumpxml instance-00000002 | grep -i tap <target dev='tap6c230305-43'/> <target dev='tapb89d6863-fb'/> <target dev='tapa74c9ee8-c4'/> <target dev='tap71e6c6ad-80'/> stack@mjozefcz-devstack-ptg:~$ 3. Now let's 'corrupt' the instance_info_caches record for this specific VM. We also noticed a race condition that causes the same problem, but we were unable to reproduce it in a development environment. The original one: --- mysql> select * from instance_info_caches where instance_uuid="a64ed18d-9868-4bf0-90d3-d710d278922d"\G; *************************** 1. 
row *************************** created_at: 2018-02-26 21:25:31 updated_at: 2018-02-26 21:29:17 deleted_at: NULL id: 2 network_info: [{"profile": {}, "ovs_interfaceid": "6c230305-43f8-42ec-9936-61fe67551168", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "fdda:5d77:e18e:0:f816:3eff:fee8:3333"}], "version": 6, "meta": {"ipv6_address_mode": "slaac", "dhcp_server": "fdda:5d77:e18e:0:f816:3eff:fee7:b04"}, "dns": [], "routes": [], "cidr": "fdda:5d77:e18e::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "fdda:5d77:e18e::1"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.0.0.3"}], "version": 4, "meta": {"dhcp_server": "10.0.0.2"}, "dns": [], "routes": [], "cidr": "10.0.0.0/26", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1"}}], "meta": {"injected": false, "tenant_id": "0314943f52014a5b9bc56b73bec475e6", "mtu": 1450}, "id": "96343d33-5dd2-4289-b0cc-e6c664c2ddd9", "label": "private"}, "devname": "tap6c230305-43", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:e8:33:33", "active": true, "type": "ovs", "id": "6c230305-43f8-42ec-9936-61fe67551168", "qbg_params": null}, {"profile": {}, "ovs_interfaceid": "b89d6863-fb4c-405c-89f9-698bd9773ad6", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "fdda:5d77:e18e:0:f816:3eff:fe53:231c"}], "version": 6, "meta": {"ipv6_address_mode": "slaac", "dhcp_server": "fdda:5d77:e18e:0:f816:3eff:fee7:b04"}, "dns": [], "routes": [], "cidr": "fdda:5d77:e18e::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "fdda:5d77:e18e::1"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": 
[], "address": "10.0.0.5"}], "version": 4, "meta": {"dhcp_server": "10.0.0.2"}, "dns": [], "routes": [], "cidr": "10.0.0.0/26", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1"}}], "meta": {"injected": false, "tenant_id": "0314943f52014a5b9bc56b73bec475e6", "mtu": 1450}, "id": "96343d33-5dd2-4289-b0cc-e6c664c2ddd9", "label": "private"}, "devname": "tapb89d6863-fb", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:53:23:1c", "active": true, "type": "ovs", "id": "b89d6863-fb4c-405c-89f9-698bd9773ad6", "qbg_params": null}, {"profile": {}, "ovs_interfaceid": "a74c9ee8-c426-48ef-890f-3988ecbe95ff", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::e"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.15"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "9c6f74dab29f4c738e82320075fa1f57", "mtu": 1500}, "id": "9e702a96-2744-40a2-a649-33f935d83ad3", "label": "public"}, "devname": "tapa74c9ee8-c4", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:cf:0c:e0", "active": true, "type": "ovs", "id": "a74c9ee8-c426-48ef-890f-3988ecbe95ff", "qbg_params": null}, {"profile": {}, "ovs_interfaceid": "71e6c6ad-8016-450f-93f2-75e7e014084d", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", 
"floating_ips": [], "address": "2001:db8::c"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "9c6f74dab29f4c738e82320075fa1f57", "mtu": 1500}, "id": "9e702a96-2744-40a2-a649-33f935d83ad3", "label": "public"}, "devname": "tap71e6c6ad-80", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:6d:dc:85", "active": true, "type": "ovs", "id": "71e6c6ad-8016-450f-93f2-75e7e014084d", "qbg_params": null}] instance_uuid: a64ed18d-9868-4bf0-90d3-d710d278922d deleted: 0 1 row in set (0.00 sec) ---- Modified one (I removed first port from list): tap6c230305-43 ---- mysql> select * from instance_info_caches where instance_uuid="a64ed18d-9868-4bf0-90d3-d710d278922d"\G; *************************** 1. 
row *************************** created_at: 2018-02-26 21:25:31 updated_at: 2018-02-26 21:29:17 deleted_at: NULL id: 2 network_info: [{"profile": {}, "ovs_interfaceid": "b89d6863-fb4c-405c-89f9-698bd9773ad6", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "fdda:5d77:e18e:0:f816:3eff:fe53:231c"}], "version": 6, "meta": {"ipv6_address_mode": "slaac", "dhcp_server": "fdda:5d77:e18e:0:f816:3eff:fee7:b04"}, "dns": [], "routes": [], "cidr": "fdda:5d77:e18e::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "fdda:5d77:e18e::1"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.0.0.5"}], "version": 4, "meta": {"dhcp_server": "10.0.0.2"}, "dns": [], "routes": [], "cidr": "10.0.0.0/26", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1"}}], "meta": {"injected": false, "tenant_id": "0314943f52014a5b9bc56b73bec475e6", "mtu": 1450}, "id": "96343d33-5dd2-4289-b0cc-e6c664c2ddd9", "label": "private"}, "devname": "tapb89d6863-fb", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:53:23:1c", "active": true, "type": "ovs", "id": "b89d6863-fb4c-405c-89f9-698bd9773ad6", "qbg_params": null}, {"profile": {}, "ovs_interfaceid": "a74c9ee8-c426-48ef-890f-3988ecbe95ff", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::e"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.15"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": 
{"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "9c6f74dab29f4c738e82320075fa1f57", "mtu": 1500}, "id": "9e702a96-2744-40a2-a649-33f935d83ad3", "label": "public"}, "devname": "tapa74c9ee8-c4", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:cf:0c:e0", "active": true, "type": "ovs", "id": "a74c9ee8-c426-48ef-890f-3988ecbe95ff", "qbg_params": null}, {"profile": {}, "ovs_interfaceid": "71e6c6ad-8016-450f-93f2-75e7e014084d", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8::c"}], "version": 6, "meta": {}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": "9c6f74dab29f4c738e82320075fa1f57", "mtu": 1500}, "id": "9e702a96-2744-40a2-a649-33f935d83ad3", "label": "public"}, "devname": "tap71e6c6ad-80", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:6d:dc:85", "active": true, "type": "ovs", "id": "71e6c6ad-8016-450f-93f2-75e7e014084d", "qbg_params": null}] instance_uuid: a64ed18d-9868-4bf0-90d3-d710d278922d deleted: 0 ---- 4. 
Now let's take a look at `nova list`: stack@mjozefcz-devstack-ptg:~$ nova list | grep test-2 | a64ed18d-9868-4bf0-90d3-d710d278922d | test-2 | ACTIVE | - | Running | public=2001:db8::e, 172.24.4.15, 2001:db8::c, 172.24.4.16; private=fdda:5d77:e18e:0:f816:3eff:fe53:231c, 10.0.0.5 | stack@mjozefcz-devstack-ptg:~$ So as you can see, we are missing one interface (private). Nova interface-list still shows it (because it queries neutron instead of nova itself): stack@mjozefcz-devstack-ptg:~$ nova interface-list a64ed18d-9868-4bf0-90d3-d710d278922d +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+ | ACTIVE | 6c230305-43f8-42ec-9936-61fe67551168 | 96343d33-5dd2-4289-b0cc-e6c664c2ddd9 | 10.0.0.3,fdda:5d77:e18e:0:f816:3eff:fee8:3333 | fa:16:3e:e8:33:33 | | ACTIVE | 71e6c6ad-8016-450f-93f2-75e7e014084d | 9e702a96-2744-40a2-a649-33f935d83ad3 | 172.24.4.16,2001:db8::c | fa:16:3e:6d:dc:85 | | ACTIVE | a74c9ee8-c426-48ef-890f-3988ecbe95ff | 9e702a96-2744-40a2-a649-33f935d83ad3 | 172.24.4.15,2001:db8::e | fa:16:3e:cf:0c:e0 | | ACTIVE | b89d6863-fb4c-405c-89f9-698bd9773ad6 | 96343d33-5dd2-4289-b0cc-e6c664c2ddd9 | 10.0.0.5,fdda:5d77:e18e:0:f816:3eff:fe53:231c | fa:16:3e:53:23:1c | +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+ stack@mjozefcz-devstack-ptg:~$ 5. 
During this time, check the logs: yes, _heal_instance_info_cache has been running for a while, but without success; the port is still missing from the instance_info_caches table: Feb 26 22:12:03 mjozefcz-devstack-ptg nova-compute[27459]: DEBUG oslo_service.periodic_task [None req-ac707da5-3413-412c-b314-ab38db2134bc service nova] Running periodic task ComputeManager._heal_instance_info_cache {{(pid=27459) run_periodic_tasks /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215}} Feb 26 22:12:03 mjozefcz-devstack-ptg nova-compute[27459]: DEBUG nova.compute.manager [None req-ac707da5-3413-412c-b314-ab38db2134bc service nova] Starting heal instance info cache {{(pid=27459) _heal_instance_info_cache /opt/stack/nova/nova/compute/manager.py:6541}} Feb 26 22:12:04 mjozefcz-devstack-ptg nova-compute[27459]: DEBUG nova.compute.manager [None req-ac707da5-3413-412c-b314-ab38db2134bc service nova] [instance: a64ed18d-9868-4bf0-90d3-d710d278922d] Updated the network info_cache for instance {{(pid=27459) _heal_instance_info_cache /opt/stack/nova/nova/compute/manager.py:6603}} 6. OK, so let's pretend that the customer restarts the VM. stack@mjozefcz-devstack-ptg:~$ nova reboot a64ed18d-9868-4bf0-90d3-d710d278922d --hard Request to reboot server <Server: test-2> has been accepted. 7. And now check the connected interfaces. Whoops, there is no `tap6c230305-43` on the list ;( stack@mjozefcz-devstack-ptg:~$ sudo virsh dumpxml instance-00000002 | grep -i tap <target dev='tapb89d6863-fb'/> <target dev='tapa74c9ee8-c4'/> <target dev='tap71e6c6ad-80'/> Environment =========== Nova master branch, devstack I found this bug while trying to recreate bug 1825018 with a functional test. The fill_virtual_interface_list online data migration creates a fake, mostly empty instance record to satisfy a foreign key constraint in the virtual_interfaces table; this record is used as a marker when paging across cells to fulfill the migration. 
The problem is if you list deleted servers (as admin) with the all_tenants=1 and deleted=1 filters, the API will fail with a 500 error trying to load the instance.flavor field: b'2019-04-16 15:08:53,720 ERROR [nova.api.openstack.wsgi] Unexpected exception in API method' b'Traceback (most recent call last):' b' File "/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/urllib3/connectionpool.py", line 377, in _make_request' b' httplib_response = conn.getresponse(buffering=True)' b"TypeError: getresponse() got an unexpected keyword argument 'buffering'" b'' b'During handling of the above exception, another exception occurred:' b'' b'Traceback (most recent call last):' b' File "/home/osboxes/git/nova/nova/api/openstack/wsgi.py", line 671, in wrapped' b' return f(*args, **kwargs)' b' File "/home/osboxes/git/nova/nova/api/validation/__init__.py", line 192, in wrapper' b' return func(*args, **kwargs)' b' File "/home/osboxes/git/nova/nova/api/validation/__init__.py", line 192, in wrapper' b' return func(*args, **kwargs)' b' File "/home/osboxes/git/nova/nova/api/validation/__init__.py", line 192, in wrapper' b' return func(*args, **kwargs)' b' File "/home/osboxes/git/nova/nova/api/openstack/compute/servers.py", line 136, in detail' b' servers = self._get_servers(req, is_detail=True)' b' File "/home/osboxes/git/nova/nova/api/openstack/compute/servers.py", line 330, in _get_servers' b' req, instance_list, cell_down_support=cell_down_support)' b' File "/home/osboxes/git/nova/nova/api/openstack/compute/views/servers.py", line 390, in detail' b' cell_down_support=cell_down_support)' b' File "/home/osboxes/git/nova/nova/api/openstack/compute/views/servers.py", line 425, in _list_view' b' for server in servers]' b' File "/home/osboxes/git/nova/nova/api/openstack/compute/views/servers.py", line 425, in <listcomp>' b' for server in servers]' b' File "/home/osboxes/git/nova/nova/api/openstack/compute/views/servers.py", line 222, in show' b' show_extra_specs),' b' 
File "/home/osboxes/git/nova/nova/api/openstack/compute/views/servers.py", line 494, in _get_flavor' b' instance_type = instance.get_flavor()' b' File "/home/osboxes/git/nova/nova/objects/instance.py", line 1191, in get_flavor' b' return getattr(self, attr)' b' File "/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_versionedobjects/base.py", line 67, in getter' b' self.obj_load_attr(name)' b' File "/home/osboxes/git/nova/nova/objects/instance.py", line 1114, in obj_load_attr' b' self._obj_load_attr(attrname)' b' File "/home/osboxes/git/nova/nova/objects/instance.py", line 1158, in _obj_load_attr' b' self._load_flavor()' b' File "/home/osboxes/git/nova/nova/objects/instance.py", line 967, in _load_flavor' b' self.flavor = instance.flavor' b' File "/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_versionedobjects/base.py", line 67, in getter' b' self.obj_load_attr(name)' b' File "/home/osboxes/git/nova/nova/objects/instance.py", line 1101, in obj_load_attr' b' objtype=self.obj_name())' b'nova.exception.OrphanedObjectError: Cannot call obj_load_attr on orphaned Instance object' b'2019-04-16 15:08:53,722 INFO [nova.api.openstack.wsgi] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.' b"<class 'nova.exception.OrphanedObjectError'>" b'2019-04-16 15:08:53,723 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/6f70656e737461636b20342065766572/servers/detail?all_tenants=1&deleted=1" status: 500 len: 208 microversion: 2.1 time: 0.138964'
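The OrphanedObjectError in the traceback above can be illustrated with a short sketch. This mimics, but is not, the real oslo.versionedobjects code; the class and names are hypothetical:

```python
# Hedged sketch of the failure in the traceback above (this mimics, but is
# not, the real oslo.versionedobjects code): the migration's marker instance
# has no request context attached, so lazily loading an unset field such as
# flavor raises instead of fetching it from the DB.

class OrphanedObjectError(Exception):
    pass

class FakeInstance:
    def __init__(self, context=None):
        self._context = context

    def __getattr__(self, name):
        # Lazy-load path: with no context there is nothing to load from,
        # so the object is considered orphaned.
        if self._context is None:
            raise OrphanedObjectError(
                "Cannot call obj_load_attr on orphaned Instance object")
        return None  # a real object would fetch the field here

marker = FakeInstance(context=None)  # like the migration's fake instance
try:
    marker.flavor  # like instance.get_flavor() in the API traceback
    raised = False
except OrphanedObjectError as exc:
    raised = True
    print(exc)
```

This matches the reported behaviour: listing deleted servers touches the marker record, the flavor lazy-load hits the orphaned-object guard, and the API turns that into a 500.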
2021-07-20 11:08:05 Christian Rohmann bug added subscriber Christian Rohmann
2022-11-03 12:16:28 keirry bug added subscriber keirry