After a bare metal instance creation is started, an InventoryInUse exception is logged as ERROR with a stack trace in the n-cpu log.

The ironic virt driver returns an empty inventory for a node that has an allocation[1]. Because of this, the resource tracker tries to delete the node's inventory, which causes a conflict error since the resource provider for the ironic node is still allocated. A warning message used to be logged for this conflict error[2]. After a recent change[3], an InventoryInUse exception is raised[4] and this exception is logged as ERROR[5].
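The interaction can be illustrated with a toy model. This is not the nova or placement code; the class and names below are invented for illustration only. It shows why submitting an empty inventory for a provider that still has an allocation must fail:

```python
class InventoryInUse(Exception):
    """Stand-in for nova.exception.InventoryInUse."""


class FakeProvider:
    """Toy stand-in for a placement resource provider (illustration only)."""

    def __init__(self, inventory, allocations):
        self.inventory = dict(inventory)      # e.g. {"CUSTOM_BAREMETAL": 1}
        self.allocations = dict(allocations)  # consumer uuid -> resources used

    def set_inventory(self, new_inventory):
        # Placement refuses to drop a resource class that still backs an
        # allocation; the client surfaces this as InventoryInUse.
        removed = set(self.inventory) - set(new_inventory)
        for alloc in self.allocations.values():
            still_used = removed & set(alloc)
            if still_used:
                raise InventoryInUse(
                    "Inventory for %r in use" % sorted(still_used))
        self.inventory = dict(new_inventory)


# A bare metal node being deployed: its provider has one CUSTOM_BAREMETAL
# unit, consumed by the instance's allocation.
rp = FakeProvider(
    inventory={"CUSTOM_BAREMETAL": 1},
    allocations={"instance-uuid": {"CUSTOM_BAREMETAL": 1}})

try:
    # The driver reports an empty inventory while the node is deploying,
    # so the resource tracker effectively asks to delete the inventory.
    rp.set_inventory({})
except InventoryInUse as exc:
    print("raised:", exc)
```

Under this toy model, the update is rejected exactly because the allocation still references the resource class, which mirrors the conflict the resource tracker hits during deployment.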
Feb 22 11:13:08.848696 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: DEBUG nova.virt.ironic.driver [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] Node 42ae69bd-c860-4eaa-8a36-fdee78425714 is not ready for a deployment, reporting an empty inventory for it. Node's provision state is deploying, power state is power off and maintenance is False. {{(pid=14029) get_inventory /opt/stack/new/nova/nova/virt/ironic/driver.py:752}}
Feb 22 11:13:08.956620 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: DEBUG nova.scheduler.client.report [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] Refreshing aggregate associations for resource provider fdd77c1d-5b1f-4a9a-b168-0fa93362b95d, aggregates: None {{(pid=14029) _refresh_associations /opt/stack/new/nova/nova/scheduler/client/report.py:773}}
Feb 22 11:13:08.977097 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: DEBUG nova.virt.ironic.driver [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] The flavor extra_specs for Ironic instance 803d864c-542e-4bb4-a89a-38da01cb8409 have been updated for custom resource class 'baremetal'. {{(pid=14029) _pike_flavor_migration /opt/stack/new/nova/nova/virt/ironic/driver.py:545}}
Feb 22 11:13:08.994233 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: DEBUG nova.scheduler.client.report [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] Refreshing trait associations for resource provider fdd77c1d-5b1f-4a9a-b168-0fa93362b95d, traits: None {{(pid=14029) _refresh_associations /opt/stack/new/nova/nova/scheduler/client/report.py:784}}
Feb 22 11:13:09.058940 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: INFO nova.scheduler.client.report [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] [req-55086c0b-9068-49fb-ae94-cd870ab96cab] Inventory update conflict for fdd77c1d-5b1f-4a9a-b168-0fa93362b95d with generation ID 2
Feb 22 11:13:09.059437 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: DEBUG oslo_concurrency.lockutils [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] Lock "compute_resources" released by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.249s {{(pid=14029) inner /usr/local/lib/python3.5/dist-packages/oslo_concurrency/lockutils.py:285}}
Feb 22 11:13:09.062075 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] Error updating resources for node 42ae69bd-c860-4eaa-8a36-fdee78425714.: nova.exception.InventoryInUse: Inventory for ''CUSTOM_BAREMETAL'' on resource provider 'fdd77c1d-5b1f-4a9a-b168-0fa93362b95d' in use.
Feb 22 11:13:09.062229 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager Traceback (most recent call last):
Feb 22 11:13:09.062343 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/compute/manager.py", line 7245, in update_available_resource_for_node
Feb 22 11:13:09.062454 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager rt.update_available_resource(context, nodename)
Feb 22 11:13:09.062562 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 680, in update_available_resource
Feb 22 11:13:09.062669 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager self._update_available_resource(context, resources)
Feb 22 11:13:09.062781 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/usr/local/lib/python3.5/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
Feb 22 11:13:09.062896 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager return f(*args, **kwargs)
Feb 22 11:13:09.063002 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 704, in _update_available_resource
Feb 22 11:13:09.063107 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager self._init_compute_node(context, resources)
Feb 22 11:13:09.063228 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 561, in _init_compute_node
Feb 22 11:13:09.063342 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager self._update(context, cn)
Feb 22 11:13:09.063453 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 878, in _update
Feb 22 11:13:09.063557 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager inv_data,
Feb 22 11:13:09.063720 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 68, in set_inventory_for_provider
Feb 22 11:13:09.063826 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid,
Feb 22 11:13:09.063939 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
Feb 22 11:13:09.064043 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
Feb 22 11:13:09.064176 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/report.py", line 1015, in set_inventory_for_provider
Feb 22 11:13:09.064298 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager self._update_inventory(context, rp_uuid, inv_data)
Feb 22 11:13:09.064414 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/report.py", line 66, in wrapper
Feb 22 11:13:09.064516 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager return f(self, *a, **k)
Feb 22 11:13:09.064626 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/report.py", line 945, in _update_inventory
Feb 22 11:13:09.064742 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager if self._update_inventory_attempt(context, rp_uuid, inv_data):
Feb 22 11:13:09.064860 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/report.py", line 888, in _update_inventory_attempt
Feb 22 11:13:09.064986 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager resource_provider=rp_uuid,
Feb 22 11:13:09.065114 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager nova.exception.InventoryInUse: Inventory for ''CUSTOM_BAREMETAL'' on resource provider 'fdd77c1d-5b1f-4a9a-b168-0fa93362b95d' in use.
Feb 22 11:13:09.065248 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: ERROR nova.compute.manager
[1] https://github.com/openstack/nova/blob/26c593c91f008caab92ed52156dfe2d898955d3f/nova/virt/ironic/driver.py#L780
[2] https://github.com/openstack/nova/commit/26c593c91f008caab92ed52156dfe2d898955d3f#diff-94f87e728df6465becce5241f3da53c8L994
[3] https://github.com/openstack/nova/commit/26c593c91f008caab92ed52156dfe2d898955d3f#diff-94f87e728df6465becce5241f3da53c8
[4] https://github.com/openstack/nova/blob/26c593c91f008caab92ed52156dfe2d898955d3f/nova/scheduler/client/report.py#L878
[5] https://github.com/openstack/nova/blob/26c593c91f008caab92ed52156dfe2d898955d3f/nova/compute/manager.py#L7244
-----
The log above is from an ironic job[6].
[6] http://logs.openstack.org/19/546919/2/check/ironic-tempest-dsvm-ipa-partition-pxe_ipmitool-tinyipa-python3/2737ab0/logs/screen-n-cpu.txt.gz?level=DEBUG#_Feb_22_11_13_08_848696