I'm seeing something similar for the Ironic host manager here:
http://logs.openstack.org/20/407220/2/check/gate-tempest-dsvm-ironic-ipa-wholedisk-agent_ipmitool-tinyipa-multinode-ubuntu-xenial-nv/0779f94/logs/screen-n-sch.txt.gz#_2017-01-04_19_05_16_398
2017-01-04 19:05:16.398 20709 DEBUG oslo_concurrency.lockutils [req-6ce318a2-53e5-4b80-84a5-f896ab48d627 tempest-BaremetalMultitenancy-1820943769 tempest-BaremetalMultitenancy-1820943769] Lock "(u'ubuntu-xenial-2-node-rax-ord-6478210', u'21da4933-a128-45f6-a765-7e6bc071e0f3')" released by "nova.scheduler.host_manager._locked_update" :: held 0.002s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2017-01-04 19:05:16.398 20709 DEBUG oslo_concurrency.lockutils [req-6ce318a2-53e5-4b80-84a5-f896ab48d627 tempest-BaremetalMultitenancy-1820943769 tempest-BaremetalMultitenancy-1820943769] Lock "(u'ubuntu-xenial-2-node-rax-ord-6478210-359401', u'00d62acd-6f3b-4cb8-9668-12517c84b3b9')" acquired by "nova.scheduler.host_manager._locked_update" :: waited 0.000s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2017-01-04 19:05:16.398 20709 DEBUG nova.scheduler.host_manager [req-6ce318a2-53e5-4b80-84a5-f896ab48d627 tempest-BaremetalMultitenancy-1820943769 tempest-BaremetalMultitenancy-1820943769] Update host state from compute node: ComputeNode(cpu_allocation_ratio=16.0,cpu_info='',created_at=2017-01-04T19:02:54Z,current_workload=None,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=10,free_disk_gb=None,free_ram_mb=None,host='ubuntu-xenial-2-node-rax-ord-6478210-359401',host_ip=10.210.37.31,hypervisor_hostname='00d62acd-6f3b-4cb8-9668-12517c84b3b9',hypervisor_type='ironic',hypervisor_version=1,id=8,local_gb=10,local_gb_used=0,memory_mb=384,memory_mb_used=0,metrics=None,numa_topology=None,pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.0,running_vms=None,service_id=None,stats={cpu_arch='x86_64'},supported_hv_specs=[HVSpec],updated_at=2017-01-04T19:04:51Z,uuid=521cf775-5a16-4111-bf26-c30fb6725716,vcpus=1,vcpus_used=0) _locked_update /opt/stack/new/nova/nova/scheduler/host_manager.py:168
2017-01-04 19:05:16.400 20709 DEBUG oslo_concurrency.lockutils [req-6ce318a2-53e5-4b80-84a5-f896ab48d627 tempest-BaremetalMultitenancy-1820943769 tempest-BaremetalMultitenancy-1820943769] Lock "(u'ubuntu-xenial-2-node-rax-ord-6478210-359401', u'00d62acd-6f3b-4cb8-9668-12517c84b3b9')" released by "nova.scheduler.host_manager._locked_update" :: held 0.002s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server [req-6ce318a2-53e5-4b80-84a5-f896ab48d627 tempest-BaremetalMultitenancy-1820943769 tempest-BaremetalMultitenancy-1820943769] Exception during message handling
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 155, in _process_incoming
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 222, in dispatch
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 192, in _do_dispatch
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/scheduler/manager.py", line 84, in select_destinations
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     dests = self.driver.select_destinations(ctxt, spec_obj)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 51, in select_destinations
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     selected_hosts = self._schedule(context, spec_obj)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 96, in _schedule
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     hosts = self._get_all_host_states(elevated)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 134, in _get_all_host_states
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     return self.host_manager.get_all_host_states(context)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/scheduler/host_manager.py", line 610, in get_all_host_states
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     self._get_instance_info(context, compute))
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/scheduler/host_manager.py", line 180, in update
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     return _locked_update(self, compute, service, aggregates, inst_dict)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 271, in inner
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/scheduler/host_manager.py", line 169, in _locked_update
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     self._update_from_compute_node(compute)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/scheduler/ironic_host_manager.py", line 46, in _update_from_compute_node
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server     self.free_disk_mb = compute.free_disk_gb * 1024
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
We might be hitting a race where the compute node record is created by the resource tracker before its resources are updated: the ComputeNode dump above shows free_disk_gb=None and free_ram_mb=None, and free_disk_gb is exactly what _update_from_compute_node then tries to multiply by 1024.
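If that race is what's happening, a None-guard in the Ironic host manager's _update_from_compute_node would avoid the crash by skipping nodes whose resources haven't been reported yet. Below is a minimal, self-contained sketch of that idea; FakeComputeNode and the stripped-down IronicNodeState are illustrative stand-ins for the real Nova objects, and the early return is an assumed mitigation, not an actual Nova patch:

# Sketch of a possible guard, assuming the race described above.
# FakeComputeNode mimics a ComputeNode record right after the resource
# tracker created it but before it reported usage: the free_* fields
# are still None, as in the log above.

class FakeComputeNode(object):
    def __init__(self):
        self.uuid = '521cf775-5a16-4111-bf26-c30fb6725716'
        self.vcpus = 1
        self.vcpus_used = 0
        self.memory_mb = 384
        self.free_ram_mb = None   # not yet populated by the resource tracker
        self.free_disk_gb = None  # not yet populated by the resource tracker

class IronicNodeState(object):
    def _update_from_compute_node(self, compute):
        # Assumed mitigation: skip records whose inventory has not been
        # reported yet, instead of multiplying None by an int as in the
        # traceback above.
        if compute.free_ram_mb is None or compute.free_disk_gb is None:
            print('Ignoring compute node %s: no resources reported yet'
                  % compute.uuid)
            return
        self.vcpus_total = compute.vcpus
        self.vcpus_used = compute.vcpus_used
        self.free_ram_mb = compute.free_ram_mb
        self.total_usable_ram_mb = compute.memory_mb
        self.free_disk_mb = compute.free_disk_gb * 1024  # the line that raised

IronicNodeState()._update_from_compute_node(FakeComputeNode())

Skipping the node would just mean it isn't a scheduling candidate until the resource tracker's next periodic update fills in its resources, which seems like the right behavior for a half-initialized record.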