The problem was found in the 5.1.2 staging BVT job:
http://jenkins-product.srt.mirantis.net:8080/job/5.1.2.staging.centos.bvt_1/60/console
OSTF test failed:
AssertionError: Failed tests, fails: 1 should fail: 0 failed tests name: [{u'Launch instance, create snapshot, launch instance from snapshot (failure)': u'Timed out waiting to become ACTIVE Please refer to OpenStack logs for more details.'}]
I found the following nova-compute traceback in /var/log/nova-all.log:
/var/log/nova-all.log:1464:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] Traceback (most recent call last):
/var/log/nova-all.log:1465:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 351, in decorated_function
/var/log/nova-all.log:1466:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] *args, **kwargs)
/var/log/nova-all.log:1467:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2761, in snapshot_instance
/var/log/nova-all.log:1468:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] task_states.IMAGE_SNAPSHOT)
/var/log/nova-all.log:1469:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2792, in _snapshot_instance
/var/log/nova-all.log:1470:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] update_task_state)
/var/log/nova-all.log:1471:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1580, in snapshot
/var/log/nova-all.log:1472:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] image_format)
/var/log/nova-all.log:1473:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1679, in _live_snapshot
/var/log/nova-all.log:1474:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] domain.blockJobAbort(disk_path, 0)
/var/log/nova-all.log:1475:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 179, in doit
/var/log/nova-all.log:1476:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] result = proxy_call(self._autowrap, f, *args, **kwargs)
/var/log/nova-all.log:1477:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 139, in proxy_call
/var/log/nova-all.log:1478:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] rv = execute(f,*args,**kwargs)
/var/log/nova-all.log:1479:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 77, in tworker
/var/log/nova-all.log:1480:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] rv = meth(*args,**kwargs)
/var/log/nova-all.log:1481:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib64/python2.6/site-packages/libvirt.py", line 662, in blockJobAbort
/var/log/nova-all.log:1482:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] if ret == -1: raise libvirtError ('virDomainBlockJobAbort() failed', dom=self)
/var/log/nova-all.log:1483:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] libvirtError: Unable to read from monitor: Connection reset by peer
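The trace ends inside nova's `_live_snapshot`, where `domain.blockJobAbort()` raises "Unable to read from monitor: Connection reset by peer", which usually means the QEMU monitor socket went away mid-snapshot (e.g. the qemu process died). A minimal sketch of the kind of defensive handling that would turn this into a clean snapshot failure instead of an instance stuck waiting to become ACTIVE. This is illustrative only: `FakeDomain`, `LibvirtError`, and `finish_live_snapshot` are stand-ins, not nova's or libvirt's actual API.

```python
# Sketch only: FakeDomain stands in for a libvirt virDomain object;
# real handling would live in nova/virt/libvirt/driver.py.

class LibvirtError(Exception):
    """Stand-in for libvirt.libvirtError."""


class FakeDomain:
    """Simulates a domain whose QEMU monitor connection was reset."""

    def blockJobAbort(self, disk_path, flags):
        raise LibvirtError(
            "Unable to read from monitor: Connection reset by peer")


def finish_live_snapshot(domain, disk_path):
    """Abort the block job; report a clean failure if the monitor is gone.

    Returns ("ok", None) on success, ("error", message) when the
    monitor connection was lost, and re-raises any other libvirt error.
    """
    try:
        domain.blockJobAbort(disk_path, 0)
    except LibvirtError as exc:
        if "Connection reset by peer" in str(exc):
            # QEMU almost certainly crashed; surface a snapshot failure
            # so the instance can be reset instead of hanging in a
            # transitional task state.
            return ("error", str(exc))
        raise
    return ("ok", None)
```

With a healthy domain the abort would simply succeed; with the failure seen here, the caller gets an explicit error it can act on.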
After I reverted the environment, I was able to spawn instances again, and OSTF passed as well.
This has only been seen once on 5.1.2 so far, so I've put it into Incomplete for 6.0.1/6.1 for now. For 5.1.2 (which, AFAIK, is never going to be released), this is Medium priority, IMO.
If we see it again, we'll raise the priority (move it back to Confirmed, etc.).