Fails to launch an instance from a volume snapshot with Gluster libgfapi

Bug #1287622 reported by Yogev Rabl
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Medium
Assigned to: Unassigned

Bug Description

Description of problem:
While testing around Bug 1020979, I installed RHEL 6.5 on a 50 GB volume, took a snapshot of the volume, and tried to launch an instance from that snapshot.

The topology of RHOS is:
- Cloud controller + compute node.
- Standalone compute node
- Standalone Cinder with a GlusterFS back end
- Standalone Glance
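
For reference, this flow maps onto the CLI roughly as follows; the flavor, display names, and UUID placeholders are illustrative, and the legacy --block-device-mapping syntax is only one way to boot from the snapshot (exact syntax depends on the client version):

# snapshot the bootable 50 GB volume, then boot a new instance from that snapshot
cinder snapshot-create --display-name rhel65-snap <volume-uuid>
nova boot --flavor m1.medium \
  --block-device-mapping vda=<snapshot-uuid>:snap::0 \
  rhel65-from-snap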

The compute logs:

2014-03-03 17:54:49.322 9544 INFO nova.virt.libvirt.firewall [req-c986c6d9-3c36-4bfe-943e-a80d62d15ae1 None None] [instance: 2ec5753e-70bc-428f-8a2c-15cd56a56400] Ensuring static filters
2014-03-03 17:54:50.554 9544 ERROR nova.openstack.common.threadgroup [-] End of file while reading data: Input/output error
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 117, in wait
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup x.wait()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 49, in wait
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup return self.thread.wait()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 168, in wait
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup return self._exit_event.wait()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in wait
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup return hubs.get_hub().switch()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 187, in switch
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup return self.greenlet.switch()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 194, in main
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup result = function(*args, **kwargs)
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/openstack/common/service.py", line 65, in run_service
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup service.start()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/service.py", line 164, in start
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup self.manager.pre_start_hook()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 802, in pre_start_hook
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup self.update_available_resource(nova.context.get_admin_context())
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4886, in update_available_resource
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup nodenames = set(self.driver.get_available_nodes())
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/virt/driver.py", line 963, in get_available_nodes
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup stats = self.get_host_stats(refresh=refresh)
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4432, in get_host_stats
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup return self.host_state.get_host_stats(refresh=refresh)
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 386, in host_state
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup self._host_state = HostState(self)
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4832, in __init__
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup self.update_status()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4886, in update_status
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup self.driver.get_pci_passthrough_devices()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3670, in get_pci_passthrough_devices
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup pci_dev = self._get_pcidev_info(name)
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3624, in _get_pcidev_info
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup xmlstr = virtdev.XMLDesc(0)
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup File "/usr/lib64/python2.6/site-packages/libvirt.py", line 3843, in XMLDesc
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup if ret is None: raise libvirtError ('virNodeDeviceGetXMLDesc() failed')
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup libvirtError: End of file while reading data: Input/output error
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup

Version-Release number of selected component (if applicable):
openstack-nova-common-2013.2.2-2.el6ost.noarch
openstack-nova-compute-2013.2.2-2.el6ost.noarch
python-novaclient-2.15.0-2.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Configure the compute node to run with libgfapi (see the example configuration after these steps).
2. Install an OS on a volume.
3. Take a snapshot of the volume.
4. Launch an instance from the volume's snapshot.
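
For step 1, a minimal sketch of the relevant settings on the Havana packages listed above (the share name, host, and paths are placeholders; option names should be double-checked against your release):

# compute node: let qemu use the gluster block driver (libgfapi) instead of the FUSE mount
openstack-config --set /etc/nova/nova.conf DEFAULT qemu_allowed_storage_drivers gluster

# cinder node: GlusterFS back end
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
echo "gluster-host:/cinder-volumes" > /etc/cinder/shares.conf

# restart the affected services
service openstack-nova-compute restart
service openstack-cinder-volume restart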

Actual results:
The instance failed to launch.

Expected results:
The instance reaches the ACTIVE state.

Additional info:

Tracy Jones (tjones-i)
tags: added: volumes
Changed in nova:
importance: Undecided → Medium
Thang Pham (thang-pham) wrote :

What versions of gluster and qemu do you have? I believe you need qemu >= 1.3 and glusterfs >= 3.4.
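
For what it's worth, on RHEL 6 the installed versions can be checked along these lines (package names vary slightly between builds):

rpm -q qemu-kvm qemu-img glusterfs glusterfs-api glusterfs-fuse
qemu-img --version
gluster --version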

Thang Pham (thang-pham) wrote :

Also, does the new volume appear under your gluster mount point?
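
One way to check, assuming the default Cinder state path (the hashed mount directory is generated per share, so the glob below is illustrative):

mount | grep gluster
# the GlusterFS driver stores each volume as a file named volume-<uuid> under a per-share hashed directory
ls -l /var/lib/cinder/mnt/*/volume-*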

Thang Pham (thang-pham) wrote :

I had some trouble at first, but after re-compiling qemu with glusterfs enabled, I was able to launch an instance from a gluster volume on a compute node configured with libgfapi. This site helped me get it working: https://www.gluster.org/category/qemu/

According to it:

While building QEMU from source, in addition to the normal configuration options, ensure that the --enable-uuid and --enable-glusterfs options are specified explicitly to the ./configure script. (Update Feb 2013: A fix in the QEMU-1.3 time frame makes the use of --enable-uuid unnecessary for GlusterFS support in QEMU.)

Update Aug 2013: Starting with QEMU-1.6, pkg-config is used to configure the GlusterFS backend in QEMU. If you are using GlusterFS compiled and installed from sources, then the GlusterFS package config file (glusterfs-api.pc) might not be present at the standard path and you will have to explicitly add the path by executing this command before running the QEMU configure script:
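
A typical version of that command, assuming GlusterFS was installed under /usr/local (adjust the prefix to your install), followed by the configure and build steps, is:

# point pkg-config at glusterfs-api.pc before configuring QEMU (prefix is an assumption)
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH

# then configure and build QEMU with gluster support
./configure --enable-glusterfs --target-list=x86_64-softmmu
make && make install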

Tracy Jones (tjones-i) wrote :

Can you retest this with the above information?

Changed in nova:
status: New → Incomplete
Sean Dague (sdague) wrote :

Long-time incomplete bug.

tags: added: gluster
Changed in nova:
status: Incomplete → Invalid