Going back to the original logs, the instance does show up with the build failure in the nova scheduler logs here:
http://logs.openstack.org/66/54966/2/check/check-tempest-devstack-vm-full/d611ed0/logs/screen-n-sch.txt.gz#_2013-11-19_22_06_38_278
Which then shows up in the n-cpu logs here:
http://logs.openstack.org/66/54966/2/check/check-tempest-devstack-vm-full/d611ed0/logs/screen-n-cpu.txt.gz#_2013-11-19_22_06_33_235
2013-11-19 22:06:33.235 ERROR nova.compute.manager [req-873bc199-a51e-4f4c-a085-3ae084c0d4dc ServersNegativeTestJSON-tempest-616305902-user ServersNegativeTestJSON-tempest-616305902-tenant] [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26] Instance failed to spawn
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26] Traceback (most recent call last):
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]   File "/opt/stack/new/nova/nova/compute/manager.py", line 1436, in _spawn
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]     block_device_info)
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2107, in spawn
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]     block_device_info, context=context)
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3296, in _create_domain_and_network
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]     self.firewall_driver.setup_basic_filtering(instance, network_info)
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]   File "/opt/stack/new/nova/nova/virt/libvirt/firewall.py", line 305, in setup_basic_filtering
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]     self.nwfilter.setup_basic_filtering(instance, network_info)
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]   File "/opt/stack/new/nova/nova/virt/libvirt/firewall.py", line 134, in setup_basic_filtering
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]     vif))
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]   File "/opt/stack/new/nova/nova/virt/libvirt/firewall.py", line 249, in _define_filter
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]     self._conn.nwfilterDefineXML(xml)
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]     rv = execute(f,*args,**kwargs)
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]     rv = meth(*args,**kwargs)
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 2651, in nwfilterDefineXML
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]     if ret is None:raise libvirtError('virNWFilterDefineXML() failed', conn=self)
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26] libvirtError: Cannot recv data: Connection reset by peer
2013-11-19 22:06:33.235 24775 TRACE nova.compute.manager [instance: 62bfeebd-8878-477f-9eac-a8b21ec5ac26]
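Reading the bottom of the trace: the firewall driver's nwfilter define call goes through eventlet's tpool proxy into the libvirt bindings, and when libvirtd drops the socket mid-request the call surfaces as libvirtError('Cannot recv data: Connection reset by peer'). As a hypothetical illustration only (not nova's actual code, and not a proposed patch), one could imagine retrying just that error class; all names below are made up for the sketch, with a fake connection standing in for libvirt:

```python
# Hypothetical sketch: retry an nwfilter define call that dies with a
# connection reset. FakeConn stands in for a real libvirt connection;
# nothing here is nova's real implementation.
import time


class libvirtError(Exception):
    """Stand-in for libvirt.libvirtError."""


class FakeConn(object):
    """Simulates libvirtd resetting the connection once, then recovering."""

    def __init__(self):
        self.calls = 0

    def nwfilterDefineXML(self, xml):
        self.calls += 1
        if self.calls == 1:
            raise libvirtError('Cannot recv data: Connection reset by peer')
        return 'filter-defined'


def define_filter_with_retry(conn, xml, retries=3, delay=0):
    """Retry only on connection-reset errors; re-raise anything else."""
    for attempt in range(retries):
        try:
            return conn.nwfilterDefineXML(xml)
        except libvirtError as e:
            if 'Connection reset by peer' not in str(e) or attempt == retries - 1:
                raise
            time.sleep(delay)


conn = FakeConn()
print(define_filter_with_retry(conn, '<filter/>'))
```

Whether a retry is even safe here depends on why libvirtd reset the connection in the first place (e.g. a crash vs. a transient restart), which is exactly what the log digging below is trying to establish.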
This may be similar to bug 1255624, but the error is a bit different.
Here is the query:
message:"libvirtError: Cannot recv data: Connection reset by peer" AND filename:"logs/screen-n-cpu.txt"
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibGlidmlydEVycm9yOiBDYW5ub3QgcmVjdiBkYXRhOiBDb25uZWN0aW9uIHJlc2V0IGJ5IHBlZXJcIiBBTkQgZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiYWxsIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDg0MTk1MjExNH0=
That has 3 hits in the last 2 weeks, so it looks like this is still a problem. I'm surprised there isn't an existing bug/query for this yet.
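Since there's no existing query, the logstash search above could be turned into an elastic-recheck fingerprint. A sketch of what the query file might look like (the filename would use this bug's number; the exact file layout should be checked against the elastic-recheck repo):

```yaml
# queries/<bug-number>.yaml -- hypothetical elastic-recheck fingerprint
query: >
  message:"libvirtError: Cannot recv data: Connection reset by peer" AND
  filename:"logs/screen-n-cpu.txt"
```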