LVM-backed VM fails to launch

Bug #1373962 reported by Dan Genin
This bug affects 1 person
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: High
Assigned to: Unassigned
Milestone: 2014.2

Bug Description

The LVM ephemeral storage backend is broken in the most recent Nova (commit 945646e1298a53be6ae284766f5023d754dfe57d).

To reproduce in Devstack:

1. Configure Nova to use LVM ephemeral storage by adding the following to the create_nova_conf function in lib/nova

        iniset $NOVA_CONF libvirt images_type "lvm"
        iniset $NOVA_CONF libvirt images_volume_group "nova-lvm"

2. Create a backing file for LVM

        truncate -s 5G nova-backing-file

3. Attach the file to a loop device

        sudo losetup /dev/loop0 nova-backing-file

4. Create the nova-lvm volume group

        sudo vgcreate nova-lvm /dev/loop0
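
    To confirm the loop device and volume group are in place before starting Devstack, standard losetup/LVM commands can be used (a quick sanity check, not part of the original report):

        sudo losetup -a | grep nova-backing-file
        sudo vgs nova-lvm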

5. Launch Devstack
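
    After stack.sh completes, the generated config should carry the values from step 1; a quick check (assumes the default /etc/nova/nova.conf path):

        grep -E 'images_(type|volume_group)' /etc/nova/nova.conf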

6. Alternatively, skipping step 1, /etc/nova/nova.conf can be modified after Devstack is launched by adding

        [libvirt]
        images_type = lvm
        images_volume_group = nova-lvm

    and then restarting nova-compute by entering the Devstack screen session, switching to the n-cpu window, and hitting Ctrl-C, Up-arrow, and Enter (a scripted equivalent is sketched below).
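
    For a scripted restart, screen's "stuff" command can inject the same keystrokes (a sketch only; it assumes the default Devstack screen session name "stack" and the n-cpu window name):

        # Ctrl-C the running nova-compute, then Up-arrow + Enter to re-run it
        screen -S stack -p n-cpu -X stuff $'\003'
        sleep 1
        screen -S stack -p n-cpu -X stuff $'\033[A\r'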

7. Launch an instance

        nova boot test --flavor 1 --image cirros-0.3.2-x86_64-uec
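
    To watch the boot progress, the instance state can be polled with the standard CLI (shown here only for completeness):

        nova list
        nova show test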

The instance fails to launch; nova-compute reports:

2014-09-25 10:11:08.180 ERROR nova.compute.manager [req-b7924ad0-5f4b-46eb-a798-571d97c77145 demo demo] [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] Instance failed to spawn
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] Traceback (most recent call last):
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/opt/stack/nova/nova/compute/manager.py", line 2231, in _build_resources
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] yield resources
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/opt/stack/nova/nova/compute/manager.py", line 2101, in _build_and_run_instance
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] block_device_info=block_device_info)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2617, in spawn
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] block_device_info, disk_info=disk_info)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4434, in _create_domain_and_network
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] domain.destroy()
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4358, in _create_domain
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] for vif in network_info if vif.get('active', True) is False]
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] six.reraise(self.type_, self.value, self.tb)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4349, in _create_domain
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] raise exception.VirtualInterfaceCreateException()
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] rv = execute(f, *args, **kwargs)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] six.reraise(c, e, tb)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] rv = meth(*args, **kwargs)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] File "/usr/lib/python2.7/dist-packages/libvirt.py", line 711, in createWithFlags
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] libvirtError: cannot open file '/dev/nova-lvm/8d69f0de-253a-403e-a137-0da77b0d415c_disk.config': No such file or directory
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 8d69f0de-253a-403e-a137-0da77b0d415c]
2014-09-25 10:11:08.218 AUDIT nova.compute.manager [req-b7924ad0-5f4b-46eb-a798-571d97c77145 demo demo] [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] Terminating instance
2014-09-25 10:11:08.228 29649 INFO nova.virt.libvirt.driver [-] [instance: 8d69f0de-253a-403e-a137-0da77b0d415c] Instance destroyed successfully.
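
The path libvirt complains about appears to be the instance's config-drive image under /dev/nova-lvm. Whether such a logical volume was actually created can be checked with standard LVM tools (a diagnostic sketch, not part of the original report):

        sudo lvs nova-lvm
        ls -l /dev/nova-lvm/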

Tags: libvirt lvm
Revision history for this message
Sean Dague (sdague) wrote :

The commit hash doesn't seem to be valid; can you provide a link to the review?

Changed in nova:
status: New → Incomplete
milestone: juno-rc1 → none
Revision history for this message
Sean Dague (sdague) wrote :

Can you provide your entire nova.conf? I'm curious if there is other interaction with the ephemeral key work.

Sean Dague (sdague)
tags: added: libvirt
tags: removed: backed instance nova
Changed in nova:
status: Incomplete → Confirmed
Revision history for this message
Dan Genin (daniel-genin) wrote :

Here is the git log excerpt for the commit

commit 945646e1298a53be6ae284766f5023d754dfe57d
Merge: f3c99ba 4adc837
Author: Jenkins <email address hidden>
Date: Wed Sep 24 17:59:03 2014 +0000

    Merge "Sync network_info if instance not found before _build_resources yield"

Revision history for this message
Dan Genin (daniel-genin) wrote :

My nova.conf

[libvirt]
inject_partition = -2
use_usb_tablet = False
cpu_mode = none
virt_type = kvm
images_volume_group = nova-lvm
images_type = lvm

[DEFAULT]
flat_interface = eth0
flat_network_bridge = br100
vlan_interface = eth0
public_interface = br100
network_manager = nova.network.manager.FlatDHCPManager
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
compute_driver = libvirt.LibvirtDriver
default_ephemeral_format = ext4
rabbit_password = password
rabbit_hosts = 192.168.123.2
rpc_backend = nova.openstack.common.rpc.impl_kombu
ec2_dmz_host = 192.168.123.2
vncserver_proxyclient_address = 127.0.0.1
vncserver_listen = 127.0.0.1
vnc_enabled = true
xvpvncproxy_base_url = http://192.168.123.2:6081/console
novncproxy_base_url = http://192.168.123.2:6080/vnc_auto.html
logging_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s
force_config_drive = always
use_syslog = True
instances_path = /opt/stack/data/nova/instances
lock_path = /opt/stack/data/nova
state_path = /opt/stack/data/nova
enabled_apis = ec2,osapi_compute,metadata
instance_name_template = instance-%08x
sql_connection = mysql://root:password@127.0.0.1/nova?charset=utf8
metadata_workers = 1
ec2_workers = 0
osapi_compute_workers = 1
my_ip = 192.168.123.2
s3_port = 3333
s3_host = 192.168.123.2
default_floating_pool = public
fixed_range =
force_dhcp_release = True
dhcpbridge_flagfile = /etc/nova/nova.conf
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
rootwrap_config = /etc/nova/rootwrap.conf
api_paste_config = /etc/nova/api-paste.ini
allow_resize_to_same_host = True
auth_strategy = keystone
debug = True
verbose = True

[conductor]
workers = 1

[osapi_v3]
enabled = True

[keystone_authtoken]
signing_dir = /var/cache/nova
admin_password = password
admin_user = nova
cafile =
admin_tenant_name = service
identity_uri = http://192.168.123.2:35357

[spice]
enabled = false
html5proxy_base_url = http://192.168.123.2:6082/spice_auto.html

[glance]
api_servers = 192.168.123.2:9292

[keymgr]
fixed_key = FC00478BB84CC84C86B8200D5B96378C4BB174FEA6354C106D2CE1CD6F131CD3

Revision history for this message
Sean Dague (sdague) wrote :
Changed in nova:
milestone: none → juno-rc1
Revision history for this message
Dan Genin (daniel-genin) wrote :
Changed in nova:
status: Confirmed → Fix Committed
Thierry Carrez (ttx)
Changed in nova:
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in nova:
milestone: juno-rc1 → 2014.2