libvirt errors cause unstable instances spawning

Bug #1424996 reported by Sergey Kolekonov
Affects              Status     Importance  Assigned to  Milestone
Mirantis OpenStack   Invalid    High        MOS Nova
  5.1.x              Won't Fix  Medium      MOS Nova
  6.0.x              Invalid    High        MOS Nova

Bug Description

The problem was found on the 5.1.2 staging job
http://jenkins-product.srt.mirantis.net:8080/job/5.1.2.staging.centos.bvt_1/60/console

OSTF test failed:
AssertionError: Failed tests, fails: 1 should fail: 0 failed tests name: [{u'Launch instance, create snapshot, launch instance from snapshot (failure)': u'Timed out waiting to become ACTIVE Please refer to OpenStack logs for more details.'}]
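
For context, the "Timed out waiting to become ACTIVE" failure is the OSTF check polling the instance status until it turns ACTIVE within a timeout. A minimal sketch of such a wait loop (using python-novaclient) is below; the client setup, timeout, and interval values are assumptions for illustration, not taken from the OSTF code.

import time
from novaclient import client as nova_client

def wait_for_active(nova, server_id, timeout=300, interval=5):
    # Poll the server status until it becomes ACTIVE, goes to ERROR, or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = nova.servers.get(server_id).status
        if status == 'ACTIVE':
            return
        if status == 'ERROR':
            raise AssertionError('Instance went to ERROR while waiting for ACTIVE')
        time.sleep(interval)
    raise AssertionError('Timed out waiting to become ACTIVE')

# Hypothetical usage:
# nova = nova_client.Client('2', username, password, tenant, auth_url)
# wait_for_active(nova, 'cbae7aea-eb7a-4a34-98c0-79a59f6d5755')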

I've found the following errors in nova-compute.log:

/var/log/nova-all.log:1464:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] Traceback (most recent call last):
/var/log/nova-all.log:1465:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 351, in decorated_function
/var/log/nova-all.log:1466:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] *args, **kwargs)
/var/log/nova-all.log:1467:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2761, in snapshot_instance
/var/log/nova-all.log:1468:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] task_states.IMAGE_SNAPSHOT)
/var/log/nova-all.log:1469:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2792, in _snapshot_instance
/var/log/nova-all.log:1470:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] update_task_state)
/var/log/nova-all.log:1471:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1580, in snapshot
/var/log/nova-all.log:1472:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] image_format)
/var/log/nova-all.log:1473:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1679, in _live_snapshot
/var/log/nova-all.log:1474:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] domain.blockJobAbort(disk_path, 0)
/var/log/nova-all.log:1475:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 179, in doit
/var/log/nova-all.log:1476:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] result = proxy_call(self._autowrap, f, *args, **kwargs)
/var/log/nova-all.log:1477:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 139, in proxy_call
/var/log/nova-all.log:1478:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] rv = execute(f,*args,**kwargs)
/var/log/nova-all.log:1479:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 77, in tworker
/var/log/nova-all.log:1480:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] rv = meth(*args,**kwargs)
/var/log/nova-all.log:1481:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] File "/usr/lib64/python2.6/site-packages/libvirt.py", line 662, in blockJobAbort
/var/log/nova-all.log:1482:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] if ret == -1: raise libvirtError ('virDomainBlockJobAbort() failed', dom=self)
/var/log/nova-all.log:1483:2015-02-23 08:30:18.181 28171 TRACE nova.compute.manager [instance: cbae7aea-eb7a-4a34-98c0-79a59f6d5755] libvirtError: Unable to read from monitor: Connection reset by peer
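
The failure happens in the _live_snapshot path when Nova calls domain.blockJobAbort() and the QEMU monitor connection is gone ("Connection reset by peer"). A minimal sketch of how that call could be guarded with a retry is below; this is an illustration based on the traceback, not Nova's actual fix, and the retry count and delay are assumed values.

import time
import libvirt  # python-libvirt bindings

def abort_block_job_with_retry(domain, disk_path, retries=3, delay=2):
    # Try to abort the block job; retry if the QEMU monitor drops the connection.
    for attempt in range(1, retries + 1):
        try:
            # Same call as nova/virt/libvirt/driver.py _live_snapshot in the traceback above
            domain.blockJobAbort(disk_path, 0)
            return
        except libvirt.libvirtError:
            # "Unable to read from monitor: Connection reset by peer" ends up here
            if attempt == retries:
                raise
            time.sleep(delay)

# Hypothetical usage:
# conn = libvirt.open('qemu:///system')
# dom = conn.lookupByUUIDString('cbae7aea-eb7a-4a34-98c0-79a59f6d5755')
# abort_block_job_with_retry(dom, '/var/lib/nova/instances/<uuid>/disk')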

After I reverted the environment, I was able to spawn instances and OSTF also succeeded.

Tags: nova staging
summary: - libvirt errors causes unstable instances spawning
+ libvirt errors cause unstable instances spawning
Changed in mos:
milestone: 5.1.2 → 6.1
Changed in mos:
status: New → Confirmed
Revision history for this message
Roman Podoliaka (rpodolyaka) wrote :

This has only been seen once, on 5.1.2, so far. Putting it into Incomplete for 6.0.1/6.1 for now. For 5.1.2 (which is never going to be released, AFAIK), this is Medium, IMO.

If we see it again, we'll raise the priority (put it back to Confirmed, etc.).

Changed in mos:
status: Confirmed → Incomplete
Revision history for this message
Dmitry Mescheryakov (dmitrymex) wrote :

The bug has never been seen on 6.x and has been in the Incomplete state there for more than a month, so I am moving it to Invalid. Please reopen if it occurs on 6.x.

Changed in mos:
status: Incomplete → Invalid
Revision history for this message
Vitaly Sedelnik (vsedelnik) wrote :

Won't Fix for 5.1.1-updates, as this issue doesn't affect customer deployments.
