cloudservers boot fails on ubuntu/xenserver

Bug #710882 reported by Wayne A. Walls
This bug affects 1 person
Affects: Glance
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: 

Bug Description

Running XenServer 5.6 with an Ubuntu 10.04 VM installed. Followed the instructions from http://wiki.openstack.org/XenServerDevelopment, with some minor differences (mainly using nova packages instead of running the services in screen sessions).

When I try 'cloudservers boot test --flavor=1 --image=3' or 'cloudservers boot --flavor=1 --image=3 test', I get the following error in nova-compute.log:

#####
root@xbuntu01:~/openstack/nova/plugins/xenserver/xenapi# tail -f /var/log/nova/nova-compute.log
2011-01-31 12:54:43,472 DEBUG nova.virt.xenapi.vm_utils [-] Size for image 3:524288000 from MainProcess (pid=16194) _fetch_image_glance /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:317
2011-01-31 12:54:44,094 DEBUG nova.virt.xenapi.vm_utils [-] Created VDI OpaqueRef:c33f4da3-d30f-ab6a-b49d-c7a76774cf22 (Glance image 3, 524320256, False) on OpaqueRef:126eaa52-6b27-9755-5075-56c616f64997. from MainProcess (pid=16194) create_vdi /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:238
2011-01-31 12:54:44,143 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VDI OpaqueRef:c33f4da3-d30f-ab6a-b49d-c7a76774cf22 ... from MainProcess (pid=16194) with_vdi_attached_here /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:635
2011-01-31 12:54:44,192 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VDI OpaqueRef:c33f4da3-d30f-ab6a-b49d-c7a76774cf22 done. from MainProcess (pid=16194) with_vdi_attached_here /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:637
2011-01-31 12:54:44,192 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:cf2107ec-18e5-3119-0447-ea91d19ddf2f ... from MainProcess (pid=16194) with_vdi_attached_here /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:639
2011-01-31 12:54:45,537 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:cf2107ec-18e5-3119-0447-ea91d19ddf2f done. from MainProcess (pid=16194) with_vdi_attached_here /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:641
2011-01-31 12:54:45,583 DEBUG nova.virt.xenapi.vm_utils [-] VBD OpaqueRef:cf2107ec-18e5-3119-0447-ea91d19ddf2f plugged as xvda from MainProcess (pid=16194) with_vdi_attached_here /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:643
2011-01-31 12:54:45,592 DEBUG nova.virt.xenapi.vm_utils [-] VBD OpaqueRef:cf2107ec-18e5-3119-0447-ea91d19ddf2f plugged into wrong dev, remapping to sda from MainProcess (pid=16194) with_vdi_attached_here /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:647
2011-01-31 12:54:45,592 DEBUG nova.virt.xenapi.vm_utils [-] Writing partition table 63 1024062 to /dev/sda... from MainProcess (pid=16194) _write_partition /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:718
2011-01-31 12:54:45,654 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:c33f4da3-d30f-ab6a-b49d-c7a76774cf22 ... from MainProcess (pid=16194) with_vdi_attached_here /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:650

2011-01-31 13:14:45,726 ERROR nova.virt.xenapi.vm_utils [-] Ignoring XenAPI.Failure in VBD.unplug: ['INTERNAL_ERROR', 'Watch.Timeout(1200.)']
2011-01-31 13:14:45,776 ERROR nova.virt.xenapi.vm_utils [-] Ignoring XenAPI.Failure ['OPERATION_NOT_ALLOWED', "VBD 'ad6161df-61bf-02ee-7fcc-b7d3e43fb1f5' still attached to 'cedbe6db-75ce-235f-9b69-137a64c49506'"]
2011-01-31 13:14:45,777 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:c33f4da3-d30f-ab6a-b49d-c7a76774cf22 done. from MainProcess (pid=16194) with_vdi_attached_here /usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py:653
2011-01-31 13:14:45,777 ERROR nova.compute.manager [NJJM9HO0N0COHDXWXXY8 dub openstack] instance 8: Failed to spawn
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 211, in run_instance
(nova.compute.manager): TRACE: self.driver.spawn(instance_ref)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi_conn.py", line 157, in spawn
(nova.compute.manager): TRACE: self._vmops.spawn(instance)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vmops.py", line 83, in spawn
(nova.compute.manager): TRACE: instance.image_id, user, project, disk_image_type)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 301, in fetch_image
(nova.compute.manager): TRACE: access, type)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 326, in _fetch_image_glance
(nova.compute.manager): TRACE: lambda dev:
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 648, in with_vdi_attached_here
(nova.compute.manager): TRACE: return f(dev)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 328, in <lambda>
(nova.compute.manager): TRACE: virtual_size, image_file))
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 703, in _stream_disk
(nova.compute.manager): TRACE: _write_partition(virtual_size, dev)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 725, in _write_partition
(nova.compute.manager): TRACE: execute('parted --script %s mklabel msdos' % dest)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vm_utils.py", line 723, in execute
(nova.compute.manager): TRACE: check_exit_code=check_exit_code)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/utils.py", line 147, in execute
(nova.compute.manager): TRACE: cmd=cmd)
(nova.compute.manager): TRACE: ProcessExecutionError: Unexpected error while running command.
(nova.compute.manager): TRACE: Command: parted --script /dev/sda mklabel msdos
(nova.compute.manager): TRACE: Exit code: 1
(nova.compute.manager): TRACE: Stdout: 'Error: Could not stat device /dev/sda - No such file or directory.\n'
(nova.compute.manager): TRACE: Stderr: ''
(nova.compute.manager): TRACE:
2011-01-31 13:14:45,953 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/exception.py", line 116, in _wrap
(nova.exception): TRACE: return f(*args, **kw)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 223, in run_instance
(nova.exception): TRACE: self._update_state(context, instance_id)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 127, in _update_state
(nova.exception): TRACE: info = self.driver.get_info(instance_ref['name'])
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi_conn.py", line 193, in get_info
(nova.exception): TRACE: return self._vmops.get_info(instance_id)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vmops.py", line 357, in get_info
(nova.exception): TRACE: vm = self._get_vm_opaque_ref(instance)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/xenapi/vmops.py", line 164, in _get_vm_opaque_ref
(nova.exception): TRACE: raise Exception(_('Instance not present %s') % instance_name)
(nova.exception): TRACE: Exception: Instance not present instance-00000008
(nova.exception): TRACE:
2011-01-31 13:14:45,958 ERROR nova.root [-] Exception during message handling
(nova.root): TRACE: Traceback (most recent call last):
(nova.root): TRACE: File "/usr/lib/pymodules/python2.6/nova/rpc.py", line 192, in receive
(nova.root): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.root): TRACE: File "/usr/lib/pymodules/python2.6/nova/exception.py", line 122, in _wrap
(nova.root): TRACE: raise Error(str(e))
(nova.root): TRACE: Error: Instance not present instance-00000008
(nova.root): TRACE:
#####
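The log shows the VBD being remapped from xvda to sda, and then parted failing with "Could not stat device /dev/sda", which suggests the /dev/sda node never actually appeared in the domU. A quick diagnostic along these lines (a hypothetical helper, not nova code) would confirm which node really exists before anything shells out to parted:

```python
import os

def find_device_node(dev):
    """Hypothetical diagnostic (not nova's code): given the short
    device name the VBD was remapped to (e.g. 'sda'), return the
    /dev node that actually exists in this domU, also checking the
    xvd* name the kernel may have used instead."""
    suffix = dev[-1]  # e.g. 'a' from 'sda'
    for candidate in ('/dev/' + dev, '/dev/xvd' + suffix):
        if os.path.exists(candidate):
            return candidate
    raise OSError('no device node found for %r' % dev)
```

If this raises for 'sda' on the compute domU, the remap in with_vdi_attached_here picked a name the guest kernel never created.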

The server shows up as 'active', but with no IP info:

root@xbuntu01:~# cloudservers list
+----+------+--------+-----------+------------+
| ID | Name | Status | Public IP | Private IP |
+----+------+--------+-----------+------------+
| 8 | test | active | | |
+----+------+--------+-----------+------------+

#####

root@xbuntu01:~# cloudservers image-list
+----+---------+--------+
| ID | Name | Status |
+----+---------+--------+
| 1 | ramdisk | active |
| 2 | kernel | active |
| 3 | machine | active |
+----+---------+--------+

#####

root@xbuntu01:~# cloudservers flavor-list
+----+-----------+-------+------+
| ID | Name | RAM | Disk |
+----+-----------+-------+------+
| 1 | m1.tiny | 512 | 0 |
| 2 | m1.small | 2048 | 20 |
| 3 | m1.medium | 4096 | 40 |
| 4 | m1.large | 8192 | 80 |
| 5 | m1.xlarge | 16384 | 160 |
+----+-----------+-------+------+

#####

Not sure what else to try; the storage is showing up in XenCenter, so communication with the host is working at that level.
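For what it's worth, the ProcessExecutionError at the bottom of the trace is just nova's exit-code check firing when parted returns 1. A simplified, illustrative stand-in for that wrapper (not nova's actual implementation) behaves like this:

```python
import subprocess

class ProcessExecutionError(Exception):
    pass

def execute(cmd, check_exit_code=True):
    """Simplified stand-in for nova.utils.execute, for illustration
    only: run a shell command and raise when the exit code is nonzero,
    which is what happens above when parted exits 1 because /dev/sda
    has no device node."""
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if check_exit_code and proc.returncode != 0:
        raise ProcessExecutionError(
            'Unexpected error while running command.\n'
            'Command: %s\nExit code: %d\nStdout: %r\nStderr: %r'
            % (cmd, proc.returncode, proc.stdout, proc.stderr))
    return proc.stdout, proc.stderr
```

So the real failure is the missing device node; everything after that in the trace is just the error propagating.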

Cheers

Revision history for this message
Matt Dietz (cerberus) wrote :

Where are you running compute? For XenServer, it needs to be running inside a domU on the host.

Revision history for this message
Wayne A. Walls (wayne-walls) wrote :

That is where compute is running, inside an Ubuntu 10.04 LTS install.

Jay Pipes (jaypipes)
Changed in glance:
status: New → Invalid