test_volume_swap failed in Kaminario Cinder Driver CI

Bug #1662820 reported by nikeshkm
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

The following test is failing in our Kaminario Cinder Driver CI:
tempest.api.compute.admin.test_volume_swap.TestVolumeSwap.test_volume_swap [421.842922s] ... FAILED
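
For context, the test exercises Nova's swap-volume API: it boots a server, attaches one volume, then asks Nova to swap it in place for a second volume. A rough reproduction sketch using python-novaclient follows; the auth details are placeholders and the UUIDs are taken from the log below, so treat this as an illustration rather than the tempest code itself:

# Hypothetical reproduction sketch, not the tempest test itself.
from keystoneauth1 import identity, session
from novaclient import client

# Placeholder credentials; substitute the real CI endpoint/credentials.
auth = identity.Password(auth_url='http://controller:5000/v3',
                         username='admin', password='secret',
                         project_name='admin',
                         user_domain_id='default',
                         project_domain_id='default')
nova = client.Client('2', session=session.Session(auth=auth))

# Swap the attached volume. Under the hood, Nova's libvirt driver copies
# the disk to the new volume with a block-copy job and then pivots onto it.
nova.volumes.update_server_volume(
    'b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5',  # server (from the log below)
    'a86302ea-9104-4084-9825-b863156f4964',  # volume currently attached
    '48910757-0f3b-47ba-a278-e36ddf62d415')  # volume to swap in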

The following traceback appears in n-cpu.log:
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [req-3dcb5fb5-ca83-4e80-bfae-dbc96cc4d0de tempest-TestVolumeSwap-1094009596 tempest-TestVolumeSwap-1094009596] [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] Failed to swap volume a86302ea-9104-4084-9825-b863156f4964 for 48910757-0f3b-47ba-a278-e36ddf62d415
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] Traceback (most recent call last):
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] File "/opt/stack/new/nova/nova/compute/manager.py", line 4982, in _swap_volume
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] resize_to)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1303, in swap_volume
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] self._swap_volume(guest, disk_dev, conf.source_path, resize_to)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1264, in _swap_volume
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] dev.abort_job(pivot=True)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] File "/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 704, in abort_job
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] self._guest._domain.blockJobAbort(self._disk, flags=flags)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] result = proxy_call(self._autowrap, f, *args, **kwargs)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] rv = execute(f, *args, **kwargs)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] six.reraise(c, e, tb)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] rv = meth(*args, **kwargs)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 650, in blockJobAbort
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] if ret == -1: raise libvirtError ('virDomainBlockJobAbort() failed', dom=self)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] libvirtError: Requested operation is not valid: pivot of disk 'vdb' requires an active copy job
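
The failing step is the pivot at the end of the copy: Nova starts a block-copy job with blockRebase(..., VIR_DOMAIN_BLOCK_REBASE_COPY), polls until the job reports cur == end, then pivots the guest onto the new volume with blockJobAbort(..., VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT), and it is that pivot which libvirt 1.2.2 rejects. A minimal sketch of the same sequence against libvirt-python, assuming a qemu:///system connection; the destination path is a placeholder:

import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5')

# Start a block-copy job mirroring 'vdb' onto the pre-created destination.
flags = (libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY |
         libvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT)
dom.blockRebase('vdb', '/dev/disk/by-path/NEW-VOLUME', 0, flags)  # placeholder path

# Wait until the copy catches up (cur == end means source and
# destination are in sync and the job is mirroring new writes).
while True:
    info = dom.blockJobInfo('vdb', 0)
    if info and info['cur'] == info['end']:
        break
    time.sleep(0.5)

# Pivot onto the new volume. This is the call failing in the traceback:
# on this libvirt, cur == end can be reported before the job is internally
# marked as an active/ready copy, so the pivot races and is rejected.
dom.blockJobAbort('vdb', libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT)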

Other details:
virt_type = qemu

$ dpkg -l | grep qemu
ii ipxe-qemu 1.0.0+git-20131111.c3d1e78-2ubuntu1.1 all PXE boot firmware - ROM images for qemu
ii qemu-keymaps 2.0.0+dfsg-2ubuntu1.31 all QEMU keyboard maps
ii qemu-system 2.0.0+dfsg-2ubuntu1.31 amd64 QEMU full system emulation binaries
ii qemu-system-arm 2.0.0+dfsg-2ubuntu1.31 amd64 QEMU full system emulation binaries (arm)
ii qemu-system-common 2.0.0+dfsg-2ubuntu1.31 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-mips 2.0.0+dfsg-2ubuntu1.31 amd64 QEMU full system emulation binaries (mips)
ii qemu-system-misc 2.0.0+dfsg-2ubuntu1.31 amd64 QEMU full system emulation binaries (miscelaneous)
ii qemu-system-ppc 2.0.0+dfsg-2ubuntu1.31 amd64 QEMU full system emulation binaries (ppc)
ii qemu-system-sparc 2.0.0+dfsg-2ubuntu1.31 amd64 QEMU full system emulation binaries (sparc)
ii qemu-system-x86 2.0.0+dfsg-2ubuntu1.31 amd64 QEMU full system emulation binaries (x86)
ii qemu-utils 2.0.0+dfsg-2ubuntu1.31 amd64 QEMU utilities

$ dpkg -l | grep libvirt
ii libvirt-bin 1.2.2-0ubuntu13.1.17 amd64 programs for the libvirt library
ii libvirt-dev 1.2.2-0ubuntu13.1.17 amd64 development files for the libvirt library
ii libvirt0 1.2.2-0ubuntu13.1.17 amd64 library for interfacing with different virtualization systems

Revision history for this message
Lee Yarwood (lyarwood) wrote :

This smells like https://bugzilla.redhat.com/show_bug.cgi?id=1202704, which would make sense given that I can't reproduce this on F25 and we haven't seen it in any Xenial-based jobs, which use libvirt 1.3.1-1. IMHO you should just move to Xenial ASAP.
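
If moving to Xenial cannot happen immediately, the old libvirt can at least be detected up front in the CI. A sketch using libvirt-python's getLibVersion(), which encodes the daemon version as major * 1,000,000 + minor * 1,000 + release; 1.3.1 (the Xenial version mentioned above) is used as the threshold here as an assumption, not as the exact release that fixed the race:

import libvirt

conn = libvirt.open('qemu:///system')
ver = conn.getLibVersion()  # e.g. 1002002 for libvirt 1.2.2
major, minor, release = ver // 1000000, (ver // 1000) % 1000, ver % 1000

# 1.3.1 is treated as the known-good baseline per the Xenial jobs above,
# not as the confirmed first-fixed version.
if (major, minor, release) < (1, 3, 1):
    print('libvirt %d.%d.%d may hit the swap-volume pivot race; '
          'consider moving the CI image to Xenial.'
          % (major, minor, release))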

Revision history for this message
nikeshkm (nike-niec) wrote :

OK, I will try using Xenial as the disk image for our Kaminario CI nodepool VMs.

Matt Riedemann (mriedem)
Changed in nova:
status: New → Invalid
Revision history for this message
nikeshkm (nike-niec) wrote :

Hi, the issue is resolved now that we use Xenial as the disk image in our Kaminario CI.
Thanks for the suggestion.
