libvirt live_snapshot periodically explodes on libvirt 1.2.2 in the gate
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Compute (nova) | Confirmed | Undecided | Unassigned | |
Bug Description
Seeing this here:
http://
2014-06-24 23:15:41.714 | tempest.
2014-06-24 23:15:41.714 | -------
2014-06-24 23:15:41.714 |
2014-06-24 23:15:41.714 | Captured traceback-1:
2014-06-24 23:15:41.714 | ~~~~~~~
2014-06-24 23:15:41.715 | Traceback (most recent call last):
2014-06-24 23:15:41.715 | File "tempest/
2014-06-24 23:15:41.715 | resp, body = self.delete(
2014-06-24 23:15:41.715 | File "tempest/
2014-06-24 23:15:41.715 | return self.request(
2014-06-24 23:15:41.715 | File "tempest/
2014-06-24 23:15:41.715 | resp, resp_body)
2014-06-24 23:15:41.715 | File "tempest/
2014-06-24 23:15:41.715 | raise exceptions.
2014-06-24 23:15:41.715 | NotFound: Object not found
2014-06-24 23:15:41.715 | Details: {"itemNotFound": {"message": "Image not found.", "code": 404}}
2014-06-24 23:15:41.716 |
2014-06-24 23:15:41.716 |
2014-06-24 23:15:41.716 | Captured traceback:
2014-06-24 23:15:41.716 | ~~~~~~~~~~~~~~~~~~~
2014-06-24 23:15:41.716 | Traceback (most recent call last):
2014-06-24 23:15:41.716 | File "tempest/
2014-06-24 23:15:41.716 | self.server_
2014-06-24 23:15:41.716 | File "tempest/
2014-06-24 23:15:41.716 | 'ACTIVE')
2014-06-24 23:15:41.716 | File "tempest/
2014-06-24 23:15:41.716 | raise_on_
2014-06-24 23:15:41.717 | File "tempest/
2014-06-24 23:15:41.717 | raise exceptions.
2014-06-24 23:15:41.717 | TimeoutException: Request timed out
2014-06-24 23:15:41.717 | Details: (ImagesOneServe
Looks like it's trying to delete image with uuid 518a32d0-
This may be related to bug 1320617, a general performance issue with glance.
Looking in the glance registry log, the image is created here:
2014-06-24 22:51:23.538 15740 INFO glance.
The image is deleted here:
2014-06-24 22:54:53.146 15740 INFO glance.
And the 'not found' is here:
2014-06-24 22:54:56.508 15740 INFO glance.
| Matt Riedemann (mriedem) wrote : | #1 |
| Matt Riedemann (mriedem) wrote : | #2 |
Here is a logstash query on the tempest failure:
message:
18 hits in 2 days.
| Matt Riedemann (mriedem) wrote : | #3 |
Nova bug 1255624 is tracking libvirt connection reset errors; in that case it failed during virDomainSuspend, here it fails during virDomainBlockJobAbort.
| tags: | added: libvirt |
| Matt Riedemann (mriedem) wrote : | #4 |
Here is a logstash query on the libvirt connection reset error:
159 hits in 2 days; there is a small percentage of successful runs when that shows up.
Looking at the tests when this fails, they are doing image snapshots so this seems like a pretty good query.
| Changed in nova: | |
| importance: | Undecided → High |
| no longer affects: | glance |
| summary: | test_images_oneserver times out in tearDown during task_state - "image_pending_upload" → snapshot hangs when libvirt connection is reset |
e-r patch: https:/
Looks like this really spiked on 6/24 and goes down again on 6/25.
| Sean Dague (sdague) wrote : | #6 |
Going down on 6/25 is a mirage, we're just backed up on ES data.
| Matt Riedemann (mriedem) wrote : | #7 |
Wondering if bug 1193146 has any interesting historical information.
| Ken'ichi Ohmichi (oomichi) wrote : | #8 |
I also faced this problem many times today, and most failures happened on ListImageFilter
I'm not sure why it does not happen on ListImageFilter
| Sean Dague (sdague) wrote : | #9 |
It looks like there is a very particular explosion around _live_snapshot. The bug actually seems to only explode when we are in _live_snapshot, and not in any other case. (Modifying the elastic-recheck search string for _live_snapshot yields the same number of hits.)
| Changed in nova: | |
| assignee: | nobody → Sean Dague (sdague) |
| summary: | snapshot hangs when libvirt connection is reset → libvirt live_snapshot periodically explodes on libvirt 1.2.2 in the gate |
| Sean Dague (sdague) wrote : | #10 |
It's worth noting that the _live_snapshot code path was never tested by us until the trusty update, as it was hidden behind a version flag that meant we didn't run it before in the gate.
| Matt Riedemann (mriedem) wrote : | #11 |
This should get us around the gate failures for now https:/
| Matt Riedemann (mriedem) wrote : | #12 |
@Ken'ichi, per comment 8, it's a pretty intensive setup:
The setup creates 2 servers and then 3 snapshots from those 2 servers, and the JSON and XML test classes are running concurrently, so we could be creating multiple snapshots concurrently, which is in theory overloading libvirt/qemu and causing the connection reset with libvirt.
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: master
commit c1c159460de376a
Author: Sean Dague <email address hidden>
Date: Wed Jun 25 16:56:04 2014 -0400
effectively disable libvirt live snapshotting
As being seen in the gate, libvirt 1.2.2 doesn't appear to actually
handle live snapshotting under any appreciable load (possibly
related to parallel operations). It isn't a 100% failure, but it's
currently being hit in a large number of runs.
Effectively turn this off by increasing the
MIN_LIBVIRT_LIVESNAPSHOT_VERSION to a version that doesn't yet
exist. This can get us back to a working state, then we can decide
if live snapshotting is something that can be made to actually work
under load.
DocImpact
Related-Bug: #1334398
Change-Id: I9908b743df2093
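To make the mechanism concrete, here is a minimal sketch of the gating idea in the commit above (Python; the tuple values and helper name are assumptions for illustration, not the literal Nova code):

```python
# A minimal sketch of the version-gating idea (values and helper names are
# assumptions, not the literal Nova code).
MIN_LIBVIRT_LIVESNAPSHOT_VERSION = (1, 3, 0)  # deliberately newer than anything in the gate
MIN_QEMU_LIVESNAPSHOT_VERSION = (1, 3, 0)     # assumed value for illustration

def can_live_snapshot(libvirt_version, qemu_version, instance_is_running):
    """Decide whether to take the live snapshot path or fall back to cold."""
    return (instance_is_running
            and libvirt_version >= MIN_LIBVIRT_LIVESNAPSHOT_VERSION
            and qemu_version >= MIN_QEMU_LIVESNAPSHOT_VERSION)

# libvirt 1.2.2 in the gate now fails the check, so Nova takes the cold path.
print(can_live_snapshot((1, 2, 2), (2, 0, 0), True))  # -> False
```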
| Kashyap Chamarthy (kashyapc) wrote : | #14 |
Thought I'd add the below.
I just created a simple test[1] which creates an external live snapshot[2] of a libvirt guest (with the versions affecting the gate -- libvirt 1.2.2 and QEMU 2.0 on Fedora 20), by executing the below command in a loop of 100 (I also tested with 1000; it ran just fine too). I ran the script for 3 virtual machines in parallel.
$ virsh snapshot-create-as --domain $DOMAIN \
--name snap-$i \
--description snap$i-desc \
--disk-only \
--diskspec hda,snapshot=external \
--atomic
Result of a 100 loop run would be an image with a backing chain of 100 qcow2 images[3].
The above script just creates a snapshot, nothing more. Matt Riedemann pointed out on #openstack-nova that in the gate there could be other tests running concurrently that could be doing things like suspend/
[1] https:/
[2] "external live snapshot "meaning: Every time you take a snapshot, the current disk becomes a (read-only) 'backing file' and a new qcow2 overlay is created to track the current 'delta'.
[3] http://
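For readers more comfortable with the libvirt Python bindings than with virsh, the loop test above looks roughly like this (a sketch, not the script at [1]; the guest name, the 'hda' disk target and the snapshot XML are assumptions):

```python
# Take N disk-only external snapshots of one guest, as in the virsh loop above.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>snap-{i}</name>
  <description>snap{i}-desc</description>
  <disks>
    <disk name='hda' snapshot='external'/>
  </disks>
</domainsnapshot>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('vm1')          # assumed guest name
for i in range(100):
    # Same effect as --disk-only --atomic in the virsh command above.
    dom.snapshotCreateXML(
        SNAPSHOT_XML.format(i=i),
        libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY |
        libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_ATOMIC)
conn.close()
```

Each iteration stacks a new qcow2 overlay on the previous one, which is why a 100-iteration run ends with a backing chain of 100 images, as noted above.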
| Daniel Berrange (berrange) wrote : | #15 |
The interesting thing in the logs is the stack trace about virDomainBlockJobAbort.
Nova issues this API call right at the start of the snapshot function to validate there's no old stale job left over:
# Abort is an idempotent operation, so make sure any block
# jobs which may have failed are ended. This operation also
# confirms the running instance, as opposed to the system as a
# whole, has a new enough version of the hypervisor (bug 1193146).
try:
As the comment says, aborting the job is supposed to be a completely safe thing to do. We don't even expect any existing job to be running, so it should basically end up as a no-op inside QEMU.
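In the Python bindings, that defensive call is essentially the following (a sketch rather than the exact Nova code; `disk_path` stands in for whatever device path the instance's root disk uses):

```python
# Sketch of the defensive abort (not the exact Nova code).
import libvirt

def abort_stale_block_job(dom, disk_path):
    """Best-effort cleanup of any leftover block job on disk_path.

    blockJobAbort() is idempotent: if no job is running it should be a
    no-op inside QEMU, which is why this is expected to be safe.
    """
    try:
        dom.blockJobAbort(disk_path, 0)
    except libvirt.libvirtError:
        # Nothing to abort, or the abort itself failed; either way we only
        # wanted to clear stale state before starting the snapshot.
        pass
```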
Now the error message libvirt reports when virDomainBlockJobAbort fails is:
libvirtError: Unable to read from monitor: Connection reset by peer
This is a generic message you get when QEMU crashes & burns unexpectedly, causing the monitor connection to be dropped.
We've not even got as far as running the libvirt snapshot API at this point when QEMU crashes & burns. This likely explains why Kashyap can't see the error in his test script, which just invokes the snapshot.
This all points the finger towards a flaw in QEMU of some kind, but there's no easy way to figure out what this might be from the libvirtd logs.
What we need here is the /var/log/
| Daniel Berrange (berrange) wrote : | #16 |
This is the service I was talking about
https:/
We need to re-configure it to collect core dumps as it is disabled by default. Then capture any crashes in
/var/crash/
| Daniel Berrange (berrange) wrote : | #17 |
Sorry, I was looking at the wrong blockJobAbort call in the code earlier. The actual one that is failing is in this code:
# NOTE (rmk): Establish a temporary mirror of our root disk and
# issue an abort once we have a complete copy.
while self._wait_
So we've done a block rebase, and then once we're done waiting for it to finish, we abort the job, and at that point we see the crashed QEMU. The QEMU logs are still something useful to get.
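A rough sketch of that code path with the libvirt Python bindings (illustrative only; the copy destination, flags and polling interval are assumptions, not the exact Nova implementation):

```python
# Illustrative sketch of the rebase / wait / abort sequence.
import time
import libvirt

def mirror_disk_then_abort(dom, disk_path, copy_path):
    # Start a background copy ("mirror") of the root disk into copy_path.
    dom.blockRebase(disk_path, copy_path, 0,
                    libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY |
                    libvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW)

    # Wait until cur == end, i.e. the copy is complete and the job is only
    # mirroring new writes.
    while True:
        info = dom.blockJobInfo(disk_path, 0)
        if info and info.get('end', 0) != 0 and info['cur'] >= info['end']:
            break
        time.sleep(0.5)

    # This is the abort that is failing in the gate; with the mirror already
    # complete it should return almost instantly.
    dom.blockJobAbort(disk_path, 0)
```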
| Daniel Berrange (berrange) wrote : | #18 |
I'm actually beginning to wonder if there is a flaw in the tempest tests rather than in QEMU. The "Unable to read from monitor: Connection reset by peer" error message can actually indicate that a second thread has killed QEMU while the first thread is talking to it, so this is a potential alternative idea to explore vs my previous QEMU-SEGV bug theory.
I've been examining the screen-n-cpu.log file to see what happens with instance 90c79adf-
First I see the snapshot process starting
2014-06-24 22:51:24.314 INFO nova.virt.
Then I see something killing this very same instance:
2014-06-24 22:54:40.255 AUDIT nova.compute.
And a lifecycle event to show that it was killed
2014-06-24 22:54:51.033 16186 INFO nova.compute.
then we see the snapshot process crash & burn
2014-06-24 22:54:52.973 16186 TRACE nova.compute.
2014-06-24 22:54:52.973 16186 TRACE nova.compute.
2014-06-24 22:54:52.973 16186 TRACE nova.compute.
2014-06-24 22:54:52.973 16186 TRACE nova.compute.
So this looks very much to me like something in the test is killing the instance while the snapshot is still being done.
Now, as for why this doesn't affect non-live snapshots we were testing before...
For non-live snapshots, we issue a 'managedSave' call, which terminates the guest. Then we do the snapshot process. Then we start up the guest again from the managed save image. My guess is that this racing 'Terminate instance' call is happening while the guest is already shut down and hence does not cause a failure of the test suite when doing a non-live snapshot (or at least the window in which the race could hit is dramatically smaller).
So based on the sequence in the screen-n-cpu.log file my money is currently on a race in the test scripts where something explicitly kills the instance while snapshot is being taken, and that the non-live snapshot code is not exposed to the race.
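For contrast, the non-live path described above has roughly the following shape (again a sketch; `do_snapshot` is a hypothetical callback standing in for Nova's image capture step):

```python
# Sketch of the non-live (cold) snapshot flow described above.
def cold_snapshot(dom, do_snapshot):
    dom.managedSave(0)      # guest is shut off, its state saved to disk
    try:
        do_snapshot()       # capture the now-quiescent disk image
    finally:
        # create() boots the domain again, restoring from the managed save
        # image if one exists.
        dom.create()

# While the guest is shut off there is no QEMU monitor and no in-flight
# block job for a racing 'terminate instance' call to disturb, which is
# why the race window is much smaller on this path.
```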
| Sean Dague (sdague) wrote : | #19 |
We do kill the snapshot if it exceeds the timeout, which is currently 196s, because at some point we need to actually move on. When these are successful, they typically succeed in about 10s.
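In other words, the tempest side is just a poll-until-ACTIVE with a hard deadline, something along these lines (hypothetical helper names, not the actual tempest code):

```python
# Sketch of the tempest-side wait (hypothetical helpers, not tempest code).
import time

BUILD_TIMEOUT = 196   # seconds, per the comment above
POLL_INTERVAL = 1

def wait_for_server_status(get_status, server_id, wanted='ACTIVE'):
    deadline = time.time() + BUILD_TIMEOUT
    while time.time() < deadline:
        if get_status(server_id) == wanted:
            return
        time.sleep(POLL_INTERVAL)
    # On timeout the test gives up and tears the server down -- the
    # "something killing this very same instance" seen in the n-cpu log.
    raise RuntimeError('Request timed out waiting for %s' % server_id)
```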
| Vish Ishaya (vishvananda) wrote : | #20 |
OK, so it looks like the problem is that the snapshot is not completing in a reasonable amount of time. The timestamps suggest it took 2.5 minutes before it was killed, which aligns with the above. So it looks like the BlockMirror is not completing.
| Kashyap Chamarthy (kashyapc) wrote : | #21 |
After looking a little bit more (at the '_live_snapshot' function in Nova[*]), the below seems to be the precise equivalent sequence of (libvirt) operations for what's happening in Nova's '_live_snapshot' function. Thanks to libvirt developer Eric Blake for reviewing this:
(0) Take the Libvirt guest's XML backup:
$ virsh dumpxml --inactive vm1 > /var/tmp/vm1.xml
(1) Abort any failed/finished block operations:
$ virsh blockjob vm1 vda --abort
(2) Undefine the running domain. (Note: Undefining a running domain does not _kill_ the domain, it just converts it from persistent to transient.)
$ virsh undefine vm1
(3) Invoke 'virsh blockcopy' (This will take time, depending on the size of disk image vm1):
$ virsh blockcopy \
--domain vm1 vda \
--wait \
--verbose
(4) Abort any failed/finished block operations (as Dan pointed out in comment #17, this is the abort operation where QEMU might be failing):
$ virsh blockjob vm1 vda --abort
NOTE: If we use the '--finish' option in step 3, it is equivalent to the above command (consequently, step 4 can be skipped).
(5) Define the guest again (to make it persistent):
$ virsh define /var/tmp/vm1.xml
(6) From the obtained new copy, convert the QCOW2 with a backing file to a flat (raw) image with no backing file:
$ qemu-img convert -f qcow2 -O raw vm1.qcow2 conv-vm1.img
Notes (from Eric Blake):
The _live_snapshot function concludes it all with redefining the
domain (umm, that part looks fishy in the code - you undefine it
only if it was persistent, but redefine the domain unconditionally;
so if you call your function on a domain that is initially
transient, you end up with a persistent domain at the end of your
function).
[*] https:/
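The same sequence, expressed with the libvirt Python bindings for anyone following along in the Nova code (a sketch under the steps above; the disk target and copy path are illustrative, and error handling is elided):

```python
# A sketch of steps (0)-(6) above using the libvirt Python bindings.
import time
import libvirt

def live_snapshot_sequence(conn, name, disk='vda',
                           copy_path='/var/tmp/vm1-copy.qcow2'):
    dom = conn.lookupByName(name)

    # (0) keep an inactive copy of the XML so the guest can be redefined later
    xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE)

    # (1) clear any failed/finished block job
    try:
        dom.blockJobAbort(disk, 0)
    except libvirt.libvirtError:
        pass

    # (2) make the running domain transient (undefine does not kill it)
    dom.undefine()
    try:
        # (3) background copy of the disk
        dom.blockRebase(disk, copy_path, 0,
                        libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY)
        while True:
            info = dom.blockJobInfo(disk, 0)
            if info and info.get('end', 0) != 0 and info['cur'] >= info['end']:
                break
            time.sleep(0.5)
        # (4) end the mirror once the copy is complete
        dom.blockJobAbort(disk, 0)
    finally:
        # (5) make the domain persistent again
        conn.defineXML(xml)
    # (6) flattening the copy to raw is left to qemu-img, as in step 6 above
```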
| Dan Genin (daniel-genin) wrote : | #22 |
FWIW, I have encountered libvirt connection reset errors when my DevStack VM ran low on memory. I first saw these when more recent versions of DevStack started spawning numerous nova-api and nova-conductor instances, which ate up the relatively small RAM of the VM. When this happened, I was unable to boot any instances, with libvirt connection reset errors reported in the n-cpu log. I'm not sure that low memory is what's causing the errors here (maybe some other resource starvation), but they look awfully similar.
| Kashyap Chamarthy (kashyapc) wrote : | #23 |
Here's some investigation of what happens when _live_snapshot is
invoked, at the libvirt level. I performed the live_snapshot test with current
git (after I modified MIN_LIBVIRT_LIVESNAPSHOT_VERSION for this test)
with the below log filters in /etc/libvirt/libvirtd.conf (and then
restarted libvirt):
log_level = 1
log_
Find what QMP commands libvirt is sending to QEMU
log_
Libvirt call sequence (More
-------
(1) virDomainGetXMLDesc
(2) virDomainBlockJobAbort
(3) virDomainUndefine
(4) virDomainBlockRebase
- NOTE (from libvirt documentation): By default, the copy job runs
in the background, and consists of two phases: (a) the block
operation copies all data from the source, and during this phase,
the job can only be canceled to revert back to the source disk,
with no guarantees about the destination. (b) After phase (a)
completes, both the source and the destination remain mirrored
until a call to the block operation with --abort.
(5) virDomainBlockJobAbort
Test
----
Boot a new Nova instance:
$ nova boot --flavor 1 --key_name oskey1 --image \
Issue a snapshot (this should trigger the _live_snapshot code path):
$ nova image-create --poll cvm1 snap1-cvm1
Ensure that "live snapshot" _did_ take place by searching the
'screen-n-cpu.log':
$ grep -i "Beginning live snapshot process" ../data/
2014-06-30 03:34:32.237 INFO nova.virt.
$
Libvirt logs
------------
(1) Save a copy of the libvirt XML (virDomainGetXMLDesc):
----
2014-06-30 09:08:13.586+0000: 8470: debug : virDomainGetXML
----
(2) Issue a BlockJobAbort (virDomainBlockJobAbort):
----
2014-06-30 09:08:13.632+0000: 8470: debug : virDomainBlockJ
----
(3) Undefine the running libvirt domain (virDomainUndefine)[*]:
----
2014-06-30 09:08:14.069+0000: 8471: debug : virDomainUndefi
2014-06-30 09:08:14.069+0000: 8471: info : qemuDomainUndef
----
We'll define the guest again further below, from the saved copy from step (1).
[*] Reasoning for making the domain transient: BlockRebase ('blockcopy')
jobs last forever until canceled, which implies that they should last
across domain restarts if the domain were persistent. But, QEMU doesn't
yet provide a way to restart a copy job on domain restart (while
mirroring is still intact). So the trick is to tempo...
| Daniel Berrange (berrange) wrote : | #24 |
I've added some more debugging to the live snapshot code in this change:
https:/
When it failed in this test run:
http://
I see
2014-06-30 11:55:55.398+0000: 18078: debug : virDomainGetBlo
2014-06-30 11:55:55.415 WARNING nova.virt.
2014-06-30 11:55:56.074+0000: 18071: debug : virDomainGetBlo
2014-06-30 11:55:56.094 WARNING nova.virt.
This shows that as far as virDomainGetBlockJobInfo is concerned, the block job has completed.
We then go into a virDomainBlockJobAbort call:
2014-06-30 11:55:56.127+0000: 18070: debug : virDomainBlockJ
This should take a fraction of a second, but after 3 minutes it still isn't done. Tempest gets fed up waiting and so issues a call to destroy the guest:
2014-06-30 11:59:10.341+0000: 18090: debug : virDomainDestro
Shortly thereafter QEMU is dead and the virDomainBlockJobAbort fails:
2014-06-30 11:59:21.279 17542 TRACE nova.compute.
So, based on this debug info I think that Nova is doing the right thing, and this is probably a bug in QEMU (or possibly, but unlikely, a bug in libvirt). My inclination is that QEMU is basically hanging in the block job abort call, due to some fairly infrequently hit race condition.
| Daniel Berrange (berrange) wrote : | #27 |
I have managed to capture a failure with verbose libvirtd.log enabled.
http://
http://
2014-07-09 11:05:50.701+0000: 21774: debug : virDomainBlockJ
2014-07-09 11:05:50.701+0000: 21774: debug : qemuDomainObjBe
2014-07-09 11:05:50.701+0000: 21774: debug : qemuDomainObjBe
2014-07-09 11:05:50.701+0000: 21774: debug : qemuDomainObjEn
2014-07-09 11:05:50.701+0000: 21774: debug : qemuMonitorBloc
2014-07-09 11:05:50.701+0000: 21774: debug : qemuMonitorJSON
2014-07-09 11:05:50.701+0000: 21774: debug : qemuMonitorSend:959 : QEMU_MONITOR_
2014-07-09 11:05:50.705+0000: 21774: debug : qemuMonitorJSON
2014-07-09 11:05:50.705+0000: 21774: debug : qemuDomainObjEx
2014-07-09 11:05:50.705+0000: 21774: debug : qemuDomainObjEn
2014-07-09 11:05:50.705+0000: 21774: debug : qemuMonitorBloc
2014-07-09 11:05:50.705+0000: 21774: debug : qemuMonitorJSON
2014-07-09 11:05:50.705+0000: 21774: debug : qemuMonitorSend:959 : QEMU_MONITOR_
2014-07-09 11:05:50.709+0000: 21774: debug : qemuMonitorJSON
2014-07-09 11:05:50.709+0000: 21774: debug : qemuDomainObjEx
2014-07-09 11:05:50.759+0000: 21774: debug : qemuDomainObjEn
2014-07-09 11:05:50.759+0000: 21774: debug : qemuMonitorBloc
| Kashyap Chamarthy (kashyapc) wrote : | #28 |
Ping.
I recall DanB noted somewhere in a mailing list thread -- and the QEMU devs I spoke to also suggested -- that one of the possible next steps to investigate this is to attach GDB to the hanging QEMU in the CI gate and get stack traces out of it.
| Sean Dague (sdague) wrote : | #29 |
We've basically just disabled this feature until someone can dig into why qemu doesn't work with it
| Changed in nova: | |
| status: | New → Confirmed |
| assignee: | Sean Dague (sdague) → nobody |
| Changed in nova: | |
| assignee: | nobody → Chet Burgess (cfb-n) |
| Chet Burgess (cfb-n) wrote : | #30 |
This came up in the ML and in openstack-nova IRC chat today.
I'm going to look into adding some sane libvirt/qemu version checking to the feature, as well as wrapping the feature with some type of config option so that it's possible to conditionally enable it on different platforms.
| Chet Burgess (cfb-n) wrote : | #31 |
This came up again today in openstack-nova and recently on the ML.
I'll look at adding some version guarding around this feature. It definitely works on precise with libvirtd 1.1.3.5 and qemu 1.5, as we have been running it in production with those versions. So I think some basic version guarding should be sufficient.
mriedem has also requested the ability to conditionally enable it on different OS gates for easier testing so I will work on that as well.
| Changed in nova: | |
| status: | Confirmed → In Progress |
Related fix proposed to branch: master
Review: https:/
Related fix proposed to branch: master
Review: https:/
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: master
commit 3f69bca12c484a0
Author: Tony Breeds <email address hidden>
Date: Tue Jan 27 14:05:02 2015 -0800
Use a workarounds group option to disable live snapshots.
Create a workarounds option to disable live snapshotting rather than
hack MIN_LIBVIRT_LIVESNAPSHOT_VERSION.
DocImpact
Related-Bug: #1334398
Change-Id: Iee9afc0afaffa1
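The shape of that workaround, sketched with oslo.config (the option name follows the commit above; treat the exact wiring as approximate rather than the literal Nova code):

```python
# Sketch of the workarounds option; surrounding wiring is approximate.
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts(
    [cfg.BoolOpt('disable_libvirt_livesnapshot',
                 default=True,
                 help='Fall back to cold snapshots instead of the libvirt '
                      'live snapshot path, as a workaround for this bug.')],
    group='workarounds')

def use_live_snapshot(hypervisor_new_enough, instance_running):
    # The workaround takes precedence over the version check, so operators on
    # platforms where live snapshot is known-good can re-enable it in nova.conf.
    if CONF.workarounds.disable_libvirt_livesnapshot:
        return False
    return hypervisor_new_enough and instance_running
```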
| Matt Riedemann (mriedem) wrote : | #35 |
https:/
Change abandoned by Joe Gordon (<email address hidden>) on branch: master
Review: https:/
Reason: This review is > 4 weeks without comment and currently blocked by a core reviewer with a -2. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and contacting the reviewer with the -2 on this review to ensure you address their concerns.
| Changed in nova: | |
| assignee: | Chet Burgess (cfb-n) → nobody |
Solving an inconsistency: The bug is 'In Progress' but without an assignee. I set the status back to the last known status before the change to 'In Progress'.
Feel free to assign the bug to yourself. If you do so, please set it to 'In Progress'.
| Changed in nova: | |
| status: | In Progress → Confirmed |
| Changed in nova: | |
| assignee: | nobody → Pranav Salunke (dguitarbite) |
| Changed in nova: | |
| assignee: | Pranav Salunke (dguitarbite) → nobody |
| Matt Riedemann (mriedem) wrote : | #38 |
At some point soon in Newton we'll have Ubuntu 16.04 in the gate jobs with libvirt 1.3.1; we should try turning this back on and see if it works.
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which led to
the observed issue can be reproduced.
If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: <RELEASE_NAME>"
Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
Valid example: CONFIRMED FOR: LIBERTY
| Changed in nova: | |
| importance: | High → Undecided |
| status: | Confirmed → Expired |
CONFIRMED FOR: NEWTON
| Changed in nova: | |
| status: | Expired → Confirmed |
The n-cpu logs have several errors for the libvirt connection being reset:
http://logs.openstack.org/70/97670/5/check/check-tempest-dsvm-postgres-full/7d4c7cf/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-06-24_22_54_52_973
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d] Traceback (most recent call last):
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]   File "/opt/stack/new/nova/nova/compute/manager.py", line 352, in decorated_function
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]     *args, **kwargs)
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2788, in snapshot_instance
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]     task_states.IMAGE_SNAPSHOT)
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2819, in _snapshot_instance
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]     update_task_state)
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1532, in snapshot
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]     image_format)
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1631, in _live_snapshot
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]     domain.blockJobAbort(disk_path, 0)
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]     rv = execute(f, *args, **kwargs)
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]     rv = meth(*args, **kwargs)
2014-06-24 22:54:52.973 16186 TRACE nova.compute.manager [instance: 90c79adf-4df1-497c-a786-13bdc5cca98d]   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 646, in blockJobAbort
2014-06-24 22:54:52.973...