Summary: can't launch instances since a fresh install of Precise and Essex-2 - show stopper.
Precise Alpha-1 AMD64
Essex-2 (Ubuntu repo)
Fresh install (this is a compute node, but instances also fail to spawn on the controller, where nova-compute runs as well)
sudo apt-get install qemu unzip nova-compute python-support
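Since the config below uses --libvirt_type=kvm, it may be worth confirming hardware virtualisation support first (a quick sanity check; I'm assuming kvm-ok from the cpu-checker package isn't installed by default):

sudo apt-get install cpu-checker
sudo kvm-ok    # should report "KVM acceleration can be used"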
/etc/nova/nova.conf
--daemonize=1
--verbose
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--force_dhcp_release
--logdir=/var/log/nova
--state_path=/var/lib/nova
--libvirt_type=kvm
--libvirt_use_virtio_for_bridges
--sql_connection=mysql://root:nova@172.15.0.1/nova
--s3_host=172.15.0.1
--rabbit_host=172.15.0.1
--ec2_host=172.15.0.1
--ec2_url=http://172.15.0.1:8773/services/Cloud
--fixed_range=192.168.0.0/16
--network_size=256
--num_networks=1
--FAKE_subdomain=ec2
--public_interface=eth1
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=172.15.0.1:9292
--iscsi_helper=tgtadm
--vlan_start=4025
--vlan_interface=eth0
--auto_assign_floating_ip
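For reference, a quick check of the VLAN/bridge wiring these flags imply (interface and bridge names taken from the flags above and from the log below; just a diagnostic sketch - note that the "Result was 1" from brctl addif in the log is also what brctl returns when the interface is already a member of the bridge, so it isn't necessarily a failure):

ip link show dev vlan4025                      # VLAN device nova should have created on eth0
brctl show br4025                              # vlan4025 should appear as a port of br4025
sudo ip addr show dev vlan4025 scope global    # same check nova itself runs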
Problem: launching an Ubuntu image fails. Nova reports the spawn as successful, but a warning in the compute log suggests a problem with the disk image. I've tried both the Oneiric and Precise cloud images, and both m1.tiny and m1.small to see if there's a difference, with the same outcome each time. These same steps and config used to work with Oneiric and Diablo.
wget http://uec-images.ubuntu.com/releases/oneiric/release/ubuntu-11.10-server-cloudimg-i386.tar.gz
cloud-publish-tarball ubuntu-11.10-server-cloudimg-i386.tar.gz images i386
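To rule out a bad upload, the image can be inspected before launching (the .img filename inside the tarball is assumed from the usual cloud-image layout):

tar xzf ubuntu-11.10-server-cloudimg-i386.tar.gz
qemu-img info oneiric-server-cloudimg-i386.img    # assumed filename; shows the image format and size
euca-describe-images                              # confirm ami-00000002 is registered and available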
euca-run-instances ami-00000002 -k openstack -t m1.tiny
euca-describe-instances
INSTANCE i-00000006 ami-00000002 172.15.1.5 server-6 running openstack (devops, openstack2) 1 m1.tiny 2011-12-21T10:19:35Z nova
euca-get-console-output i-00000006
i-00000006
2011-12-21T10:35:42Z
(no console output beyond the instance ID and timestamp)
/var/log/nova/nova-compute.log
2011-12-21 10:19:33,277 DEBUG nova.virt.libvirt_conn [-] instance instance-00000006: starting toXML method from (pid=6760) to_xml /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1183
2011-12-21 10:19:33,277 DEBUG nova.virt.libvirt.vif [-] Ensuring vlan 4025 and bridge br4025 from (pid=6760) plug /usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py:82
2011-12-21 10:19:33,278 DEBUG nova.utils [-] Attempting to grab semaphore "ensure_vlan" for method "ensure_vlan"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:750
2011-12-21 10:19:33,278 DEBUG nova.utils [-] Got semaphore "ensure_vlan" for method "ensure_vlan"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:754
2011-12-21 10:19:33,278 DEBUG nova.utils [-] Attempting to grab file lock "ensure_vlan" for method "ensure_vlan"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:758
2011-12-21 10:19:33,279 DEBUG nova.utils [-] Got file lock "ensure_vlan" for method "ensure_vlan"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:769
2011-12-21 10:19:33,279 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev vlan4025 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,300 DEBUG nova.utils [-] Attempting to grab semaphore "ensure_bridge" for method "ensure_bridge"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:750
2011-12-21 10:19:33,301 DEBUG nova.utils [-] Got semaphore "ensure_bridge" for method "ensure_bridge"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:754
2011-12-21 10:19:33,301 DEBUG nova.utils [-] Attempting to grab file lock "ensure_bridge" for method "ensure_bridge"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:758
2011-12-21 10:19:33,302 DEBUG nova.utils [-] Got file lock "ensure_bridge" for method "ensure_bridge"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:769
2011-12-21 10:19:33,302 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev br4025 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,322 DEBUG nova.utils [-] Running cmd (subprocess): sudo brctl addif br4025 vlan4025 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,335 DEBUG nova.utils [-] Result was 1 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:214
2011-12-21 10:19:33,335 DEBUG nova.utils [-] Running cmd (subprocess): sudo route -n from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,349 DEBUG nova.utils [-] Running cmd (subprocess): sudo ip addr show dev vlan4025 scope global from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,374 DEBUG nova.virt.libvirt_conn [-] block_device_list [] from (pid=6760) _volume_in_mapping /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1069
2011-12-21 10:19:33,374 DEBUG nova.virt.libvirt_conn [-] block_device_list [] from (pid=6760) _volume_in_mapping /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1069
2011-12-21 10:19:33,486 DEBUG nova.virt.libvirt_conn [-] instance instance-00000006: finished toXML method from (pid=6760) to_xml /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1187
2011-12-21 10:19:33,486 INFO nova [-] called setup_basic_filtering in nwfilter
2011-12-21 10:19:33,487 INFO nova [-] ensuring static filters
2011-12-21 10:19:33,835 DEBUG nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Adding security group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x53fd650> from (pid=6760) instance_rules /usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py:650
2011-12-21 10:19:33,835 INFO nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Using cidr '0.0.0.0/0'
2011-12-21 10:19:33,836 INFO nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Using fw_rules: ['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED -j ACCEPT', '-j $provider', u'-s 192.168.15.1 -p udp --sport 67 --dport 68 -j ACCEPT', u'-s 192.168.15.0/26 -j ACCEPT', '-j ACCEPT -p tcp --dport 22 -s 0.0.0.0/0']
2011-12-21 10:19:33,836 DEBUG nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Adding security group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x53fd110> from (pid=6760) instance_rules /usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py:650
2011-12-21 10:19:33,837 INFO nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Using cidr '0.0.0.0/0'
2011-12-21 10:19:33,837 INFO nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Using fw_rules: ['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED -j ACCEPT', '-j $provider', u'-s 192.168.15.1 -p udp --sport 67 --dport 68 -j ACCEPT', u'-s 192.168.15.0/26 -j ACCEPT', '-j ACCEPT -p tcp --dport 22 -s 0.0.0.0/0', '-j ACCEPT -p icmp -s 0.0.0.0/0']
2011-12-21 10:19:33,837 DEBUG nova.utils [-] Attempting to grab semaphore "iptables" for method "apply"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:750
2011-12-21 10:19:33,838 DEBUG nova.utils [-] Got semaphore "iptables" for method "apply"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:754
2011-12-21 10:19:33,838 DEBUG nova.utils [-] Attempting to grab file lock "iptables" for method "apply"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:758
2011-12-21 10:19:33,838 DEBUG nova.utils [-] Got file lock "iptables" for method "apply"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:769
2011-12-21 10:19:33,839 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t filter from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,855 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,870 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t nat from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,886 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,901 DEBUG nova.utils [-] Running cmd (subprocess): mkdir -p /var/lib/nova/instances/instance-00000006/ from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,921 INFO nova.virt.libvirt_conn [-] instance instance-00000006: Creating image
2011-12-21 10:19:33,934 DEBUG nova.virt.libvirt_conn [-] block_device_list [] from (pid=6760) _volume_in_mapping /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1069
2011-12-21 10:19:33,935 DEBUG nova.utils [-] Attempting to grab semaphore "cb7d6887f61381feaa684f7b203baa586f53c02c_sm" for method "call_if_not_exists"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:750
2011-12-21 10:19:33,935 DEBUG nova.utils [-] Got semaphore "cb7d6887f61381feaa684f7b203baa586f53c02c_sm" for method "call_if_not_exists"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:754
2011-12-21 10:20:07,236 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._publish_service_capabilities from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,237 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Notifying Schedulers of capabilities ... from (pid=6760) _publish_service_capabilities /usr/lib/python2.7/dist-packages/nova/manager.py:193
2011-12-21 10:20:07,238 DEBUG nova.rpc [a66206a2-7745-423a-9cab-c14ade429be4 None None] Making asynchronous fanout cast... from (pid=6760) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:763
2011-12-21 10:20:07,493 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._poll_rescued_instances from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,494 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._sync_power_states from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,854 INFO nova.compute.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Found 1 in the database and 0 on the hypervisor.
2011-12-21 10:20:07,854 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,855 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._poll_rebooting_instances from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,855 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,855 DEBUG nova.compute.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=6760) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1947
2011-12-21 10:20:07,856 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._report_driver_status from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,856 INFO nova.compute.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Updating host status
2011-12-21 10:20:07,856 DEBUG nova.virt.libvirt_conn [a66206a2-7745-423a-9cab-c14ade429be4 None None] Updating host stats from (pid=6760) update_status /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1918
2011-12-21 10:20:08,969 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:08,970 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._publish_service_capabilities from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:08,970 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Notifying Schedulers of capabilities ... from (pid=6760) _publish_service_capabilities /usr/lib/python2.7/dist-packages/nova/manager.py:193
2011-12-21 10:21:08,971 DEBUG nova.rpc [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Making asynchronous fanout cast... from (pid=6760) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:763
2011-12-21 10:21:09,289 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._poll_rescued_instances from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,289 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._sync_power_states from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,885 INFO nova.compute.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Found 1 in the database and 0 on the hypervisor.
2011-12-21 10:21:09,886 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,886 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._poll_rebooting_instances from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,886 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,887 DEBUG nova.compute.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=6760) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1947
2011-12-21 10:21:09,887 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._report_driver_status from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,888 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:41,718 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/cb7d6887f61381feaa684f7b203baa586f53c02c_sm /var/lib/nova/instances/instance-00000006/disk from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:21:41,869 INFO nova.virt.libvirt_conn [78db929e-72d6-47b6-a663-c435d1b5316e None None] instance instance-00000006: injecting key into image 97bb58ab-a17e-4611-a8de-875064ebeef0
2011-12-21 10:21:41,870 DEBUG nova.utils [78db929e-72d6-47b6-a663-c435d1b5316e None None] Running cmd (subprocess): sudo qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-00000006/disk from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:21:41,907 DEBUG nova.utils [78db929e-72d6-47b6-a663-c435d1b5316e None None] Running cmd (subprocess): sudo kpartx -a /dev/nbd15 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:21:41,925 DEBUG nova.utils [78db929e-72d6-47b6-a663-c435d1b5316e None None] Running cmd (subprocess): sudo kpartx -d /dev/nbd15 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:21:41,940 DEBUG nova.utils [78db929e-72d6-47b6-a663-c435d1b5316e None None] Running cmd (subprocess): sudo qemu-nbd -d /dev/nbd15 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:21:41,969 WARNING nova.virt.libvirt_conn [78db929e-72d6-47b6-a663-c435d1b5316e None None] instance instance-00000006: ignoring error injecting data into image 97bb58ab-a17e-4611-a8de-875064ebeef0 (Mapped device was not found (we can only inject raw disk images): /dev/mapper/nbd15p1)
2011-12-21 10:21:43,391 DEBUG nova.virt.libvirt_conn [-] instance instance-00000006: is running from (pid=6760) spawn /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:670
2011-12-21 10:21:43,392 DEBUG nova.compute.manager [-] Checking state of instance-00000006 from (pid=6760) _get_power_state /usr/lib/python2.7/dist-packages/nova/compute/manager.py:192
2011-12-21 10:21:44,043 INFO nova.virt.libvirt_conn [-] Instance instance-00000006 spawned successfully.
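The failing injection step can be replayed by hand to see whether the partition mapping ever appears (a diagnostic sketch using the same device and disk path as the log above; the modprobe line is a guess at one common cause, since qemu-nbd needs the nbd module loaded with max_part > 0 for partition devices to show up):

sudo modprobe nbd max_part=16      # guess: without max_part, no nbdXpY devices are created
sudo qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-00000006/disk
sudo kpartx -av /dev/nbd15         # -v prints any mappings it creates
ls /dev/mapper/ | grep nbd15       # nova expects /dev/mapper/nbd15p1 here
sudo kpartx -d /dev/nbd15          # clean up
sudo qemu-nbd -d /dev/nbd15

If kpartx prints nothing, the image has no partition table, which would match the warning (as far as I know, the UEC tarball images are bare root filesystem images without a partition table).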
Given the error message, this could be a Precise issue. What was your last known-good config?