TestMultiCellMigrate.test_poll_unconfirmed_resizes_with_(no_)upcall race failing with greenlet error: Error auto-confirming resize: Reader <greenlet.greenlet object at 0x7f55c19dd048> to writer privilege escalation not allowed. Will retry later.

Bug #1858667 reported by Matt Riedemann
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Seen here:

https://35ea41893426ced4e7b7-c7b305d9ddba843cc11c9f16befc26b2.ssl.cf1.rackcdn.com/699291/4/check/nova-tox-functional-py36/6f68741/testr_results.html.gz

ft1.12: nova.tests.functional.test_cross_cell_migrate.TestMultiCellMigrate.test_poll_unconfirmed_resizes_with_upcall

testtools.testresult.real._StringException: pythonlogging:'': {{{
2020-01-07 12:39:14,920 WARNING [placement.db_api] TransactionFactory already started, not reconfiguring.
2020-01-07 12:39:15,344 INFO [nova.service] Starting conductor node (version 20.1.0)
2020-01-07 12:39:15,379 INFO [nova.service] Starting scheduler node (version 20.1.0)
2020-01-07 12:39:15,425 INFO [nova.virt.driver] Loading compute driver 'fake.MediumFakeDriver'
2020-01-07 12:39:15,426 INFO [nova.service] Starting compute node (version 20.1.0)
2020-01-07 12:39:15,443 WARNING [nova.compute.manager] Compute node host1 not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
2020-01-07 12:39:15,447 INFO [nova.compute.manager] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
2020-01-07 12:39:15,457 WARNING [nova.compute.manager] No compute node record found for host host1. If this is the first time this service is starting on this host, then you can ignore this warning.
2020-01-07 12:39:15,462 WARNING [nova.compute.resource_tracker] No compute node record for host1:host1
2020-01-07 12:39:15,466 INFO [nova.compute.resource_tracker] Compute node record created for host1:host1 with uuid: 30117184-3985-48b6-8e7e-5611c45c112c
2020-01-07 12:39:15,509 INFO [placement.requestlog] 127.0.0.1 "GET /placement/resource_providers?in_tree=30117184-3985-48b6-8e7e-5611c45c112c" status: 200 len: 26 microversion: 1.14
2020-01-07 12:39:15,518 INFO [placement.requestlog] 127.0.0.1 "POST /placement/resource_providers" status: 200 len: 828 microversion: 1.20
2020-01-07 12:39:15,519 INFO [nova.scheduler.client.report] [req-6c8c5604-6d3d-49b4-ae66-3aadfeb1ff27] Created resource provider record via placement API for resource provider with UUID 30117184-3985-48b6-8e7e-5611c45c112c and name host1.
2020-01-07 12:39:15,536 INFO [placement.requestlog] 127.0.0.1 "PUT /placement/resource_providers/30117184-3985-48b6-8e7e-5611c45c112c/inventories" status: 200 len: 401 microversion: 1.26
2020-01-07 12:39:15,545 INFO [placement.requestlog] 127.0.0.1 "GET /placement/traits?name=in:COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW" status: 200 len: 268 microversion: 1.6
2020-01-07 12:39:15,561 INFO [placement.requestlog] 127.0.0.1 "PUT /placement/resource_providers/30117184-3985-48b6-8e7e-5611c45c112c/traits" status: 200 len: 303 microversion: 1.6
2020-01-07 12:39:15,589 INFO [nova.api.openstack.requestlog] 127.0.0.1 "POST /v2.1/os-aggregates" status: 200 len: 221 microversion: 2.81 time: 0.021327
2020-01-07 12:39:15,634 INFO [placement.requestlog] 127.0.0.1 "GET /placement/resource_providers?name=host1" status: 200 len: 432 microversion: 1.0
2020-01-07 12:39:15,643 INFO [placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/30117184-3985-48b6-8e7e-5611c45c112c/aggregates" status: 200 len: 53 microversion: 1.19
2020-01-07 12:39:15,655 INFO [placement.requestlog] 127.0.0.1 "PUT /placement/resource_providers/30117184-3985-48b6-8e7e-5611c45c112c/aggregates" status: 200 len: 91 microversion: 1.19
2020-01-07 12:39:15,665 INFO [nova.api.openstack.requestlog] 127.0.0.1 "POST /v2.1/os-aggregates/1/action" status: 200 len: 285 microversion: 2.81 time: 0.072452
2020-01-07 12:39:15,683 INFO [nova.virt.driver] Loading compute driver 'fake.MediumFakeDriver'
2020-01-07 12:39:15,684 INFO [nova.service] Starting compute node (version 20.1.0)
2020-01-07 12:39:15,703 WARNING [nova.compute.manager] Compute node host2 not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
2020-01-07 12:39:15,708 INFO [nova.compute.manager] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
2020-01-07 12:39:15,718 WARNING [nova.compute.manager] No compute node record found for host host2. If this is the first time this service is starting on this host, then you can ignore this warning.
2020-01-07 12:39:15,723 WARNING [nova.compute.resource_tracker] No compute node record for host2:host2
2020-01-07 12:39:15,732 INFO [nova.compute.resource_tracker] Compute node record created for host2:host2 with uuid: 7e192b8c-5187-467c-8e7e-f8360fbce19e
2020-01-07 12:39:15,774 INFO [placement.requestlog] 127.0.0.1 "GET /placement/resource_providers?in_tree=7e192b8c-5187-467c-8e7e-f8360fbce19e" status: 200 len: 26 microversion: 1.14
2020-01-07 12:39:15,782 INFO [placement.requestlog] 127.0.0.1 "POST /placement/resource_providers" status: 200 len: 828 microversion: 1.20
2020-01-07 12:39:15,783 INFO [nova.scheduler.client.report] [req-ddaf886c-9f05-4fa2-9c4d-b0317c78e47f] Created resource provider record via placement API for resource provider with UUID 7e192b8c-5187-467c-8e7e-f8360fbce19e and name host2.
2020-01-07 12:39:15,803 INFO [placement.requestlog] 127.0.0.1 "PUT /placement/resource_providers/7e192b8c-5187-467c-8e7e-f8360fbce19e/inventories" status: 200 len: 401 microversion: 1.26
2020-01-07 12:39:15,813 INFO [placement.requestlog] 127.0.0.1 "GET /placement/traits?name=in:COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW" status: 200 len: 268 microversion: 1.6
2020-01-07 12:39:15,831 INFO [placement.requestlog] 127.0.0.1 "PUT /placement/resource_providers/7e192b8c-5187-467c-8e7e-f8360fbce19e/traits" status: 200 len: 303 microversion: 1.6
2020-01-07 12:39:15,850 INFO [nova.api.openstack.requestlog] 127.0.0.1 "POST /v2.1/os-aggregates" status: 200 len: 221 microversion: 2.81 time: 0.012939
2020-01-07 12:39:15,889 INFO [placement.requestlog] 127.0.0.1 "GET /placement/resource_providers?name=host2" status: 200 len: 432 microversion: 1.0
2020-01-07 12:39:15,899 INFO [placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/7e192b8c-5187-467c-8e7e-f8360fbce19e/aggregates" status: 200 len: 53 microversion: 1.19
2020-01-07 12:39:15,912 INFO [placement.requestlog] 127.0.0.1 "PUT /placement/resource_providers/7e192b8c-5187-467c-8e7e-f8360fbce19e/aggregates" status: 200 len: 91 microversion: 1.19
2020-01-07 12:39:15,927 INFO [nova.api.openstack.requestlog] 127.0.0.1 "POST /v2.1/os-aggregates/2/action" status: 200 len: 285 microversion: 2.81 time: 0.072743
2020-01-07 12:39:15,947 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/flavors/detail" status: 200 len: 2605 microversion: 2.81 time: 0.016728
2020-01-07 12:39:16,012 INFO [nova.api.openstack.requestlog] 127.0.0.1 "POST /v2.1/servers" status: 202 len: 412 microversion: 2.81 time: 0.061068
2020-01-07 12:39:16,037 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 1774 microversion: 2.81 time: 0.020963
2020-01-07 12:39:16,141 INFO [placement.requestlog] 127.0.0.1 "GET /placement/allocation_candidates?limit=1000&required=%21COMPUTE_STATUS_DISABLED&resources=DISK_GB%3A1%2CMEMORY_MB%3A512%2CVCPU%3A1" status: 200 len: 1491 microversion: 1.34
2020-01-07 12:39:16,188 INFO [placement.requestlog] 127.0.0.1 "GET /placement/allocations/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 19 microversion: 1.28
2020-01-07 12:39:16,220 INFO [placement.requestlog] 127.0.0.1 "PUT /placement/allocations/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 204 len: 0 microversion: 1.34
2020-01-07 12:39:16,450 INFO [placement.requestlog] 127.0.0.1 "GET /placement/allocations/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 230 microversion: 1.28
2020-01-07 12:39:16,459 INFO [nova.compute.claims] Claim successful on node host1
2020-01-07 12:39:17,267 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 1852 microversion: 2.81 time: 0.715882
2020-01-07 12:39:17,324 INFO [nova.compute.manager] Took 0.00 seconds to spawn the instance on the hypervisor.
2020-01-07 12:39:17,378 INFO [nova.compute.manager] Took 0.94 seconds to build instance.
2020-01-07 12:39:17,423 INFO [nova.scheduler.host_manager] Received an update from an unknown host 'host1'. Re-created its InstanceList.
2020-01-07 12:39:17,858 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2049 microversion: 2.81 time: 0.085234
2020-01-07 12:39:17,888 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/limits" status: 200 len: 304 microversion: 2.81 time: 0.025623
2020-01-07 12:39:18,037 INFO [nova.api.openstack.requestlog] 127.0.0.1 "POST /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa/action" status: 202 len: 0 microversion: 2.81 time: 0.144296
2020-01-07 12:39:18,178 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2060 microversion: 2.81 time: 0.137591
2020-01-07 12:39:18,226 INFO [placement.requestlog] 127.0.0.1 "GET /placement/allocations/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 230 microversion: 1.28
2020-01-07 12:39:18,236 INFO [placement.requestlog] 127.0.0.1 "GET /placement/allocations/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 230 microversion: 1.28
2020-01-07 12:39:18,247 INFO [placement.requestlog] 127.0.0.1 "GET /placement/allocations/f6b8ee46-9d48-4baa-bea3-9cf8ab4e1337" status: 200 len: 19 microversion: 1.28
2020-01-07 12:39:18,309 INFO [placement.requestlog] 127.0.0.1 "POST /placement/allocations" status: 204 len: 0 microversion: 1.28
2020-01-07 12:39:18,363 INFO [placement.requestlog] 127.0.0.1 "GET /placement/allocation_candidates?limit=1000&required=%21COMPUTE_STATUS_DISABLED&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" status: 200 len: 1497 microversion: 1.34
2020-01-07 12:39:18,396 INFO [nova.scheduler.host_manager] Host filter ignoring hosts: host1
2020-01-07 12:39:18,406 INFO [placement.requestlog] 127.0.0.1 "GET /placement/allocations/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 19 microversion: 1.28
2020-01-07 12:39:18,433 INFO [placement.requestlog] 127.0.0.1 "PUT /placement/allocations/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 204 len: 0 microversion: 1.34
2020-01-07 12:39:18,443 INFO [nova.conductor.tasks.migrate] Executing cross-cell resize task starting with target host: host2
2020-01-07 12:39:18,611 INFO [placement.requestlog] 127.0.0.1 "GET /placement/allocations/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 232 microversion: 1.28
2020-01-07 12:39:18,620 INFO [nova.compute.claims] Claim successful on node host2
2020-01-07 12:39:18,636 INFO [nova.compute.resource_tracker] Updating resource usage from migration f6b8ee46-9d48-4baa-bea3-9cf8ab4e1337
2020-01-07 12:39:18,901 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2060 microversion: 2.81 time: 0.211517
2020-01-07 12:39:18,951 INFO [nova.compute.manager] Preparing for snapshot based resize on source host host1.
2020-01-07 12:39:19,146 INFO [nova.compute.manager] Finishing snapshot based resize on destination host host2.
2020-01-07 12:39:19,218 INFO [placement.requestlog] 127.0.0.1 "GET /placement/allocations/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 232 microversion: 1.28
2020-01-07 12:39:19,368 INFO [nova.scheduler.host_manager] Received an update from an unknown host 'host2'. Re-created its InstanceList.
2020-01-07 12:39:19,579 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2064 microversion: 2.81 time: 0.162860
2020-01-07 12:39:20,173 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2062 microversion: 2.81 time: 0.088806
2020-01-07 12:39:20,385 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/detail" status: 200 len: 2044 microversion: 2.81 time: 0.207618
2020-01-07 12:39:20,423 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/os-migrations?instance_uuid=e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 520 microversion: 2.81 time: 0.034696
2020-01-07 12:39:20,468 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa/os-instance-actions" status: 200 len: 645 microversion: 2.81 time: 0.040943
2020-01-07 12:39:20,517 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa/os-instance-actions/req-f6f0b421-4a08-43c8-9227-679b3ab8f8bc" status: 200 len: 1338 microversion: 2.81 time: 0.045893
2020-01-07 12:39:20,567 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa/os-instance-actions/req-0e5a96f5-1103-4878-8cb1-e12e458e37dc" status: 200 len: 607 microversion: 2.81 time: 0.045760
2020-01-07 12:39:20,595 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/os-hypervisors?hypervisor_hostname_pattern=host1" status: 200 len: 133 microversion: 2.81 time: 0.023762
2020-01-07 12:39:20,618 INFO [placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/30117184-3985-48b6-8e7e-5611c45c112c/allocations" status: 200 len: 152 microversion: 1.0
2020-01-07 12:39:20,655 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/os-hypervisors?hypervisor_hostname_pattern=host2" status: 200 len: 133 microversion: 2.81 time: 0.031607
2020-01-07 12:39:20,668 INFO [placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/7e192b8c-5187-467c-8e7e-f8360fbce19e/allocations" status: 200 len: 154 microversion: 1.0
2020-01-07 12:39:20,750 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa/os-interface" status: 200 len: 303 microversion: 2.81 time: 0.046221
2020-01-07 12:39:20,770 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/os-hypervisors/30117184-3985-48b6-8e7e-5611c45c112c" status: 200 len: 693 microversion: 2.53 time: 0.015973
2020-01-07 12:39:20,792 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/os-hypervisors/7e192b8c-5187-467c-8e7e-f8360fbce19e" status: 200 len: 694 microversion: 2.53 time: 0.018579
2020-01-07 12:39:20,822 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/limits" status: 200 len: 305 microversion: 2.81 time: 0.026690
2020-01-07 12:39:20,827 WARNING [nova.tests.unit.image.fake] Unable to find image id 3f72d534-d153-49cd-aba8-226207886b17. Have images: {'155d900f-4e14-4e4c-a73d-069cbf4541e6': {'id': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'name': 'fakeimage123456', 'created_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'updated_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': False, 'container_format': 'raw', 'disk_format': 'raw', 'size': '25165824', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': ['tag1', 'tag2'], 'properties': {'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel', 'architecture': 'x86_64'}}, 'a2459075-d96c-40d5-893e-577ff92e721c': {'id': 'a2459075-d96c-40d5-893e-577ff92e721c', 'name': 'fakeimage123456', 'created_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'updated_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': True, 'container_format': 'ami', 'disk_format': 'ami', 'size': '58145823', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': {'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel'}}, '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6': {'id': '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', 'name': 'fakeimage123456', 'created_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'updated_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': True, 'container_format': 'bare', 'disk_format': 'raw', 'size': '83594576', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': ['tag3', 'tag4'], 'properties': {'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel', 'architecture': 'x86_64'}}, 'cedef40a-ed67-4d10-800e-17455edce175': {'id': 'cedef40a-ed67-4d10-800e-17455edce175', 'name': 'fakeimage123456', 'created_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'updated_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': True, 'container_format': 'ami', 'disk_format': 'ami', 'size': '84035174', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': {'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel'}}, 'c905cedb-7281-47e4-8a62-f26bc5fc4c77': {'id': 'c905cedb-7281-47e4-8a62-f26bc5fc4c77', 'name': 'fakeimage123456', 'created_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'updated_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': True, 'container_format': 'ami', 'disk_format': 'ami', 'size': '26360814', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': {'kernel_id': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'ramdisk_id': None}}, 'a440c04b-79fa-479c-bed1-0b816eaec379': {'id': 'a440c04b-79fa-479c-bed1-0b816eaec379', 'name': 'fakeimage6', 'created_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'updated_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': False, 'container_format': 'ova', 'disk_format': 'vhd', 'size': '49163826', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': {'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel', 'architecture': 'x86_64', 'auto_disk_config': 'False'}}, '70a599e0-31e7-49b7-b260-868f441e862b': {'id': '70a599e0-31e7-49b7-b260-868f441e862b', 'name': 'fakeimage7', 'created_at': 
datetime.datetime(2011, 1, 1, 1, 2, 3), 'updated_at': datetime.datetime(2011, 1, 1, 1, 2, 3), 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': False, 'container_format': 'ova', 'disk_format': 'vhd', 'size': '74185822', 'min_ram': 0, 'min_disk': 0, 'protected': False, 'visibility': 'public', 'tags': [], 'properties': {'kernel_id': 'nokernel', 'ramdisk_id': 'nokernel', 'architecture': 'x86_64', 'auto_disk_config': 'True'}}}
2020-01-07 12:39:20,828 INFO [nova.api.openstack.wsgi] HTTP exception thrown: Image not found.
2020-01-07 12:39:20,829 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/images/3f72d534-d153-49cd-aba8-226207886b17" status: 404 len: 62 microversion: 2.1 time: 0.002381
2020-01-07 12:39:20,835 INFO [nova.compute.manager] Found 1 unconfirmed migrations older than 1 seconds
2020-01-07 12:39:20,835 INFO [nova.compute.manager] Automatically confirming migration 1 for instance e921fb5c-de9f-4f5e-a420-0e12fa4c99aa
2020-01-07 12:39:20,876 INFO [nova.compute.manager] Error auto-confirming resize: Reader <greenlet.greenlet object at 0x7f55c19dd048> to writer privilege escalation not allowed. Will retry later.
2020-01-07 12:39:20,964 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2067 microversion: 2.81 time: 0.084149
2020-01-07 12:39:21,613 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2067 microversion: 2.81 time: 0.141117
2020-01-07 12:39:22,211 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2067 microversion: 2.81 time: 0.092270
2020-01-07 12:39:22,812 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2067 microversion: 2.81 time: 0.094725
2020-01-07 12:39:23,446 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2067 microversion: 2.81 time: 0.128086
2020-01-07 12:39:24,076 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2067 microversion: 2.81 time: 0.122928
2020-01-07 12:39:24,691 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2067 microversion: 2.81 time: 0.108408
2020-01-07 12:39:25,311 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2067 microversion: 2.81 time: 0.111924
2020-01-07 12:39:25,917 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2067 microversion: 2.81 time: 0.101655
2020-01-07 12:39:26,519 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa" status: 200 len: 2067 microversion: 2.81 time: 0.092556
2020-01-07 12:39:26,721 WARNING [nova.service] Service killed that has no database entry
2020-01-07 12:39:26,723 INFO [oslo_db.sqlalchemy.provision] DROP ALL OBJECTS, BACKEND sqlite://
2020-01-07 12:39:26,750 INFO [oslo_db.sqlalchemy.provision] DISPOSE ENGINE Engine(sqlite://)
}}}

stderr: {{{
/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_policy/policy.py:706: UserWarning: Policy "os_compute_api:os-services":"rule:admin_api" was deprecated in 20.0.0 in favor of "os_compute_api:os-services:list":"rule:system_reader_api". Reason:
Since Ussuri release, nova API policies are introducing new default roles
with scope_type capabilities. These new changes improve the security level
and manageability. New policies are more rich in term of handling access
at system and project level token with read, write roles.
Start using the new policies and enable the scope checks via config option
``nova.conf [oslo_policy] enforce_scope=True`` which is False by default.
Old policies are marked as deprecated and silently going to be ignored
in nova 23.0.0 (OpenStack W) release
. Either ensure your deployment is ready for the new default or copy/paste the deprecated policy into your policy file and maintain it manually.
  warnings.warn(deprecated_msg)
/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_policy/policy.py:706: UserWarning: Policy "os_compute_api:os-services":"rule:admin_api" was deprecated in 20.0.0 in favor of "os_compute_api:os-services:update":"rule:system_admin_api". Reason:
Since Ussuri release, nova API policies are introducing new default roles
with scope_type capabilities. These new changes improve the security level
and manageability. New policies are more rich in term of handling access
at system and project level token with read, write roles.
Start using the new policies and enable the scope checks via config option
``nova.conf [oslo_policy] enforce_scope=True`` which is False by default.
Old policies are marked as deprecated and silently going to be ignored
in nova 23.0.0 (OpenStack W) release
. Either ensure your deployment is ready for the new default or copy/paste the deprecated policy into your policy file and maintain it manually.
  warnings.warn(deprecated_msg)
/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_policy/policy.py:706: UserWarning: Policy "os_compute_api:os-services":"rule:admin_api" was deprecated in 20.0.0 in favor of "os_compute_api:os-services:delete":"rule:system_admin_api". Reason:
Since Ussuri release, nova API policies are introducing new default roles
with scope_type capabilities. These new changes improve the security level
and manageability. New policies are more rich in term of handling access
at system and project level token with read, write roles.
Start using the new policies and enable the scope checks via config option
``nova.conf [oslo_policy] enforce_scope=True`` which is False by default.
Old policies are marked as deprecated and silently going to be ignored
in nova 23.0.0 (OpenStack W) release
. Either ensure your deployment is ready for the new default or copy/paste the deprecated policy into your policy file and maintain it manually.
  warnings.warn(deprecated_msg)
}}}

Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/nova/nova/tests/functional/test_cross_cell_migrate.py", line 994, in test_poll_unconfirmed_resizes_with_upcall
    self._wait_for_state_change(server, 'ACTIVE')
  File "/home/zuul/src/opendev.org/openstack/nova/nova/tests/functional/integrated_helpers.py", line 98, in _wait_for_state_change
    server, {'status': expected_status}, max_retries)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/tests/functional/integrated_helpers.py", line 91, in _wait_for_server_parameter
    expected_params, server))
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/unittest2/case.py", line 690, in fail
    raise self.failureException(msg)
AssertionError: Wait for state change failed, expected_params={'status': 'ACTIVE'}, server={'id': 'e921fb5c-de9f-4f5e-a420-0e12fa4c99aa', 'name': 'test_cross_cell_resize', 'status': 'VERIFY_RESIZE', 'tenant_id': '6f70656e737461636b20342065766572', 'user_id': 'admin', 'metadata': {}, 'hostId': 'ad32f82f3048cef557b4ec2f5f3e952883df66e7dbb37f3f6c1352f2', 'image': {'id': '70a599e0-31e7-49b7-b260-868f441e862b', 'links': [{'rel': 'bookmark', 'href': 'http://bf1b95bb-0630-4eb8-bdee-fb96cef2c5ee/images/70a599e0-31e7-49b7-b260-868f441e862b'}]}, 'flavor': {'vcpus': 1, 'ram': 2048, 'disk': 20, 'ephemeral': 0, 'swap': 0, 'original_name': 'm1.small', 'extra_specs': {}}, 'created': '2020-01-07T12:39:15Z', 'updated': '2020-01-07T12:39:19Z', 'addresses': {'private': [{'version': 4, 'addr': '192.168.1.3', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:4c:2c:30'}]}, 'accessIPv4': '', 'accessIPv6': '', 'links': [{'rel': 'self', 'href': 'http://bf1b95bb-0630-4eb8-bdee-fb96cef2c5ee/v2.1/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa'}, {'rel': 'bookmark', 'href': 'http://bf1b95bb-0630-4eb8-bdee-fb96cef2c5ee/servers/e921fb5c-de9f-4f5e-a420-0e12fa4c99aa'}], 'OS-DCF:diskConfig': 'MANUAL', 'progress': 0, 'OS-EXT-AZ:availability_zone': 'cell2', 'config_drive': '', 'key_name': None, 'OS-SRV-USG:launched_at': '2020-01-07T12:39:19.232514', 'OS-SRV-USG:terminated_at': None, 'security_groups': [{'name': 'default'}], 'OS-EXT-SRV-ATTR:host': 'host2', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:hypervisor_hostname': 'host2', 'OS-EXT-SRV-ATTR:reservation_id': 'r-1ub2ah6i', 'OS-EXT-SRV-ATTR:launch_index': 0, 'OS-EXT-SRV-ATTR:hostname': 'test-cross-cell-resize', 'OS-EXT-SRV-ATTR:kernel_id': '', 'OS-EXT-SRV-ATTR:ramdisk_id': '', 'OS-EXT-SRV-ATTR:root_device_name': '/dev/sda', 'OS-EXT-SRV-ATTR:user_data': None, 'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': 'resized', 'OS-EXT-STS:power_state': 1, 'os-extended-volumes:volumes_attached': [], 'host_status': 'UNKNOWN', 'locked': False, 'locked_reason': None, 'description': None, 'tags': ['test'], 'trusted_image_certificates': None, 'server_groups': []}

Looks like it just recently showed up:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22%5Bnova.compute.manager%5D%20Error%20auto-confirming%20resize%3A%20Reader%20%3Cgreenlet.greenlet%5C%22%20AND%20message%3A%5C%22to%20writer%20privilege%20escalation%20not%20allowed.%20Will%20retry%20later.%5C%22%20AND%20tags%3A%5C%22console%5C%22&from=7d

Though the test has been merged for a while:

https://review.opendev.org/#/c/698322/

Revision history for this message
Lee Yarwood (lyarwood) wrote:

Hit this in https://review.opendev.org/#/c/699291/ with the following trace once LOG.info is switched to LOG.exception:

2020-07-15 12:55:04,294 ERROR [nova.compute.manager] Error auto-confirming resize: %s. Will retry later.
Traceback (most recent call last):
  File "/home/lyarwood/redhat/devel/src/openstack/nova/nova/compute/manager.py", line 9095, in _poll_unconfirmed_resizes
    self.compute_api.confirm_resize(context, instance,
  File "/home/lyarwood/redhat/devel/src/openstack/nova/nova/compute/api.py", line 223, in inner
    return function(self, context, instance, *args, **kwargs)
  File "/home/lyarwood/redhat/devel/src/openstack/nova/nova/compute/api.py", line 150, in inner
    return f(self, context, instance, *args, **kw)
  File "/home/lyarwood/redhat/devel/src/openstack/nova/nova/compute/api.py", line 3757, in confirm_resize
    source_svc = self._get_source_compute_service(context, migration)
  File "/home/lyarwood/redhat/devel/src/openstack/nova/nova/compute/api.py", line 3717, in _get_source_compute_service
    with nova_context.target_cell(context, hm.cell_mapping) as cctxt:
  File "/usr/lib64/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/home/lyarwood/redhat/devel/src/openstack/nova/nova/tests/fixtures.py", line 455, in _wrap_target_cell
    with self._cell_lock.write_lock():
  File "/usr/lib64/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/home/lyarwood/redhat/devel/src/openstack/nova/.tox/functional/lib/python3.8/site-packages/fasteners/lock.py", line 231, in write_lock
    raise RuntimeError("Reader %s to writer privilege"
RuntimeError: Reader <greenlet.greenlet object at 0x7f3b2afb8040> to writer privilege escalation not allowed

Revision history for this message
Lee Yarwood (lyarwood) wrote:

https://github.com/openstack/nova/blob/1fa6799e4171d7c93f4d6330f2437891733d7d27/nova/tests/fixtures.py#L403-L483

^ is essentially doing the following:

>>> import fasteners
>>> lock = fasteners.ReaderWriterLock()
>>> with lock.read_lock():
...     with lock.write_lock():
...         pass
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/usr/lib64/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/home/lyarwood/redhat/devel/src/openstack/nova/.tox/functional/lib/python3.8/site-packages/fasteners/lock.py", line 231, in write_lock
    raise RuntimeError("Reader %s to writer privilege"
RuntimeError: Reader <_MainThread(MainThread, started 139834271795008)> to writer privilege escalation not allowed

While https://github.com/harlowja/fasteners/commit/d67b9d08ddcf906240e5d3f2b0b118a7254b4390 suggests that only the following nesting is possible:

>>> with lock.write_lock():
...     with lock.read_lock():
...         with lock.write_lock():
...             pass
...

Revision history for this message
Lee Yarwood (lyarwood) wrote:

^ Ignore this; we aren't calling the locks in this directly nested way.

The issue I was hitting was due to both the test [1] and the periodic task it was calling [2] making their own calls to target_cell against hosts in different cells. I've mocked out the second call in the periodic to work around this (a sketch follows the references below).

[1] https://github.com/openstack/nova/blob/1fa6799e4171d7c93f4d6330f2437891733d7d27/nova/tests/functional/test_cross_cell_migrate.py#L988-L991 (destination host)
[2] https://review.opendev.org/#/c/699291/7/nova/compute/api.py@3715 (source host)
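A minimal sketch of that workaround, assuming the stub lands on the _get_source_compute_service lookup from the traceback above (the actual patch target and test plumbing in the real change may differ; dest_cell_mapping and compute_service stand in for objects the test already has):

from unittest import mock

from nova import context as nova_context

# Stub the API-side lookup that makes the second target_cell call, so only
# the test's own target_cell touches the CellDatabases fixture lock.
with mock.patch('nova.compute.api.API._get_source_compute_service',
                return_value=mock.sentinel.source_service):
    with nova_context.target_cell(ctxt, dest_cell_mapping) as cctxt:
        compute_service.manager._poll_unconfirmed_resizes(cctxt)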

Revision history for this message
Lee Yarwood (lyarwood) wrote:

Okay, so this bug was reported against the same change I've been working on, and I no longer see the issue now that I've mocked out the second call to target_cell. Closing this out.

Changed in nova:
status: Confirmed → Invalid
Revision history for this message
melanie witt (melwitt) wrote:

For posterity, I'm going to include why this happened, as discussed in #openstack-nova earlier today [1]:

In the nova-compute _poll_unconfirmed_resizes periodic task, we are calling a nova.compute.api.API method, confirm_resize [2].

Typically, in nova-api we call set_target_cell to target a RequestContext for the duration of the request. In "real life" this lasts until the RPC boundary, where the targeting is lost. Then, in nova-compute, we don't need to target a cell because nova-compute is local to a cell.

In the func test environment, we don't have real RPC, so when something is targeted in nova-api by set_target_cell, the CheatingSerializer used in the test fixtures will carry over the targeted RequestContext from nova-api -> nova-compute, unlike "real life". We do this so that nova-compute in a func test accesses the correct cell database that is normally "local" in real life.

This normally works fine, and when we call periodic tasks manually in func tests, we use target_cell to access the correct cell database because, unlike "real life", we're not local to a cell.

The problem arose because the specific proposed patch adds a target_cell call to the nova.compute.api.API.confirm_resize method [3]. So when the periodic task in the func test is manually called under target_cell, we get two nested calls to target_cell, the second of which raises the exception (modeled in the sketch below).
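The CellDatabases fixture guards "which cell database is currently selected" with a fasteners ReaderWriterLock, so the nesting can be modeled with a minimal sketch (assuming, per the traceback above, that the outer target_cell holds the shared read lock while the nested call to a different cell requests the exclusive write lock):

import fasteners

cell_lock = fasteners.ReaderWriterLock()

# Outer target_cell (the test's own call): the fixture holds the shared read
# lock for the duration of the context.
with cell_lock.read_lock():
    # Nested target_cell (added by the patch inside confirm_resize) switches
    # cells, so the fixture requests the exclusive write lock and fasteners
    # raises RuntimeError: Reader <greenlet ...> to writer privilege
    # escalation not allowed.
    with cell_lock.write_lock():
        pass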

As discussed in #openstack-nova [1], the right thing to do here longer term is probably to create a new inner method that nova-compute-specific code always calls (instead of the nova.compute.api.API method), so that nova.compute.api.API can simply target the cell and call the inner method (sketched after the references below).

[1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-07-15.log.html#t2020-07-15T15:31:08
[2] https://github.com/openstack/nova/blob/1fa6799e4171d7c93f4d6330f2437891733d7d27/nova/compute/manager.py#L9093
[3] https://review.opendev.org/#/c/699291/8/nova/compute/api.py@3715
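A hedged sketch of that longer-term shape (the class layout and the _cell_mapping_for helper below are illustrative, not nova's actual code):

from nova import context as nova_context

class API:
    def confirm_resize(self, context, instance, migration=None):
        # External entry point (nova-api): target the instance's cell once,
        # then delegate to the inner method.
        cell_mapping = self._cell_mapping_for(context, instance)
        with nova_context.target_cell(context, cell_mapping) as cctxt:
            return self._confirm_resize(cctxt, instance, migration)

    def _confirm_resize(self, cctxt, instance, migration=None):
        # Inner method for compute-local callers such as the
        # _poll_unconfirmed_resizes periodic: the context is already in the
        # right cell, so there is no nested target_cell (and no nested
        # CellDatabases fixture lock in functional tests).
        ...  # existing confirm_resize body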
