Mapped device was not found (we can only inject raw disk images): /dev/mapper/nbd15p1) : Precise A1 and E2

Bug #907269 reported by Kevin Jackson
This bug affects 3 people
Affects: OpenStack Compute (nova) | Status: Invalid | Importance: Undecided | Assigned to: Unassigned | Milestone: (none)

Bug Description

Summary: can't launch instances since a fresh install of Precise and Essex-2. Show-stopper.

Precise Alpha-1 AMD64
Essex-2 (Ubuntu repo)

Fresh install (this is a compute node, but instances also fail to spawn on the controller, where nova-compute runs as well)

sudo apt-get install qemu unzip nova-compute python-support

/etc/nova/nova.conf
--daemonize=1
--verbose
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--force_dhcp_release
--logdir=/var/log/nova
--state_path=/var/lib/nova
--libvirt_type=kvm
--libvirt_use_virtio_for_bridges
--sql_connection=mysql://root:nova@172.15.0.1/nova
--s3_host=172.15.0.1
--rabbit_host=172.15.0.1
--ec2_host=172.15.0.1
--ec2_url=http://172.15.0.1:8773/services/Cloud
--fixed_range=192.168.0.0/16
--network_size=256
--num_networks=1
--FAKE_subdomain=ec2
--public_interface=eth1
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=172.15.0.1:9292
--iscsi_helper=tgtadm
--vlan_start=4025
--vlan_interface=eth0
--auto_assign_floating_ip

Problem: Ubuntu images fail to launch. Nova reports success, but a warning in the log suggests a problem with the disk image. Tried the Oneiric cloud image and the Precise cloud image. I've tried m1.tiny and m1.small to see if there's a difference, and the outcome is the same. These same steps and config used to work with Oneiric and Diablo.

wget http://uec-images.ubuntu.com/releases/oneiric/release/ubuntu-11.10-server-cloudimg-i386.tar.gz
cloud-publish-tarball ubuntu-11.10-server-cloudimg-i386.tar.gz images i386
euca-run-instances ami-00000002 -k openstack -t m1.tiny

euca-describe-instances
INSTANCE i-00000006 ami-00000002 172.15.1.5 server-6 running openstack (devops, openstack2) 1 m1.tiny 2011-12-21T10:19:35Z nova

euca-get-console-output i-00000006
i-00000006
2011-12-21T10:35:42Z

/var/log/nova/nova-compute.log

2011-12-21 10:19:33,277 DEBUG nova.virt.libvirt_conn [-] instance instance-00000006: starting toXML method from (pid=6760) to_xml /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1183
2011-12-21 10:19:33,277 DEBUG nova.virt.libvirt.vif [-] Ensuring vlan 4025 and bridge br4025 from (pid=6760) plug /usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py:82
2011-12-21 10:19:33,278 DEBUG nova.utils [-] Attempting to grab semaphore "ensure_vlan" for method "ensure_vlan"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:750
2011-12-21 10:19:33,278 DEBUG nova.utils [-] Got semaphore "ensure_vlan" for method "ensure_vlan"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:754
2011-12-21 10:19:33,278 DEBUG nova.utils [-] Attempting to grab file lock "ensure_vlan" for method "ensure_vlan"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:758
2011-12-21 10:19:33,279 DEBUG nova.utils [-] Got file lock "ensure_vlan" for method "ensure_vlan"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:769
2011-12-21 10:19:33,279 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev vlan4025 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,300 DEBUG nova.utils [-] Attempting to grab semaphore "ensure_bridge" for method "ensure_bridge"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:750
2011-12-21 10:19:33,301 DEBUG nova.utils [-] Got semaphore "ensure_bridge" for method "ensure_bridge"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:754
2011-12-21 10:19:33,301 DEBUG nova.utils [-] Attempting to grab file lock "ensure_bridge" for method "ensure_bridge"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:758
2011-12-21 10:19:33,302 DEBUG nova.utils [-] Got file lock "ensure_bridge" for method "ensure_bridge"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:769
2011-12-21 10:19:33,302 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev br4025 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,322 DEBUG nova.utils [-] Running cmd (subprocess): sudo brctl addif br4025 vlan4025 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,335 DEBUG nova.utils [-] Result was 1 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:214
2011-12-21 10:19:33,335 DEBUG nova.utils [-] Running cmd (subprocess): sudo route -n from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,349 DEBUG nova.utils [-] Running cmd (subprocess): sudo ip addr show dev vlan4025 scope global from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,374 DEBUG nova.virt.libvirt_conn [-] block_device_list [] from (pid=6760) _volume_in_mapping /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1069
2011-12-21 10:19:33,374 DEBUG nova.virt.libvirt_conn [-] block_device_list [] from (pid=6760) _volume_in_mapping /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1069
2011-12-21 10:19:33,486 DEBUG nova.virt.libvirt_conn [-] instance instance-00000006: finished toXML method from (pid=6760) to_xml /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1187
2011-12-21 10:19:33,486 INFO nova [-] called setup_basic_filtering in nwfilter
2011-12-21 10:19:33,487 INFO nova [-] ensuring static filters
2011-12-21 10:19:33,835 DEBUG nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Adding security group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x53fd650> from (pid=6760) instance_rules /usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py:650
2011-12-21 10:19:33,835 INFO nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Using cidr '0.0.0.0/0'
2011-12-21 10:19:33,836 INFO nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Using fw_rules: ['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED -j ACCEPT', '-j $provider', u'-s 192.168.15.1 -p udp --sport 67 --dport 68 -j ACCEPT', u'-s 192.168.15.0/26 -j ACCEPT', '-j ACCEPT -p tcp --dport 22 -s 0.0.0.0/0']
2011-12-21 10:19:33,836 DEBUG nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Adding security group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x53fd110> from (pid=6760) instance_rules /usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py:650
2011-12-21 10:19:33,837 INFO nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Using cidr '0.0.0.0/0'
2011-12-21 10:19:33,837 INFO nova.virt.libvirt.firewall [ba125265-2f49-444e-a522-f20ddb079527 None None] Using fw_rules: ['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED -j ACCEPT', '-j $provider', u'-s 192.168.15.1 -p udp --sport 67 --dport 68 -j ACCEPT', u'-s 192.168.15.0/26 -j ACCEPT', '-j ACCEPT -p tcp --dport 22 -s 0.0.0.0/0', '-j ACCEPT -p icmp -s 0.0.0.0/0']
2011-12-21 10:19:33,837 DEBUG nova.utils [-] Attempting to grab semaphore "iptables" for method "apply"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:750
2011-12-21 10:19:33,838 DEBUG nova.utils [-] Got semaphore "iptables" for method "apply"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:754
2011-12-21 10:19:33,838 DEBUG nova.utils [-] Attempting to grab file lock "iptables" for method "apply"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:758
2011-12-21 10:19:33,838 DEBUG nova.utils [-] Got file lock "iptables" for method "apply"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:769
2011-12-21 10:19:33,839 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t filter from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,855 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,870 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t nat from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,886 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,901 DEBUG nova.utils [-] Running cmd (subprocess): mkdir -p /var/lib/nova/instances/instance-00000006/ from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:19:33,921 INFO nova.virt.libvirt_conn [-] instance instance-00000006: Creating image
2011-12-21 10:19:33,934 DEBUG nova.virt.libvirt_conn [-] block_device_list [] from (pid=6760) _volume_in_mapping /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1069
2011-12-21 10:19:33,935 DEBUG nova.utils [-] Attempting to grab semaphore "cb7d6887f61381feaa684f7b203baa586f53c02c_sm" for method "call_if_not_exists"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:750
2011-12-21 10:19:33,935 DEBUG nova.utils [-] Got semaphore "cb7d6887f61381feaa684f7b203baa586f53c02c_sm" for method "call_if_not_exists"... from (pid=6760) inner /usr/lib/python2.7/dist-packages/nova/utils.py:754
2011-12-21 10:20:07,236 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._publish_service_capabilities from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,237 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Notifying Schedulers of capabilities ... from (pid=6760) _publish_service_capabilities /usr/lib/python2.7/dist-packages/nova/manager.py:193
2011-12-21 10:20:07,238 DEBUG nova.rpc [a66206a2-7745-423a-9cab-c14ade429be4 None None] Making asynchronous fanout cast... from (pid=6760) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:763
2011-12-21 10:20:07,493 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._poll_rescued_instances from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,494 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._sync_power_states from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,854 INFO nova.compute.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Found 1 in the database and 0 on the hypervisor.
2011-12-21 10:20:07,854 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,855 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._poll_rebooting_instances from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,855 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,855 DEBUG nova.compute.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=6760) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1947
2011-12-21 10:20:07,856 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._report_driver_status from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:20:07,856 INFO nova.compute.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Updating host status
2011-12-21 10:20:07,856 DEBUG nova.virt.libvirt_conn [a66206a2-7745-423a-9cab-c14ade429be4 None None] Updating host stats from (pid=6760) update_status /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1918
2011-12-21 10:20:08,969 DEBUG nova.manager [a66206a2-7745-423a-9cab-c14ade429be4 None None] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:08,970 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._publish_service_capabilities from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:08,970 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Notifying Schedulers of capabilities ... from (pid=6760) _publish_service_capabilities /usr/lib/python2.7/dist-packages/nova/manager.py:193
2011-12-21 10:21:08,971 DEBUG nova.rpc [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Making asynchronous fanout cast... from (pid=6760) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:763
2011-12-21 10:21:09,289 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._poll_rescued_instances from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,289 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._sync_power_states from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,885 INFO nova.compute.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Found 1 in the database and 0 on the hypervisor.
2011-12-21 10:21:09,886 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,886 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._poll_rebooting_instances from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,886 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,887 DEBUG nova.compute.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=6760) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1947
2011-12-21 10:21:09,887 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._report_driver_status from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:09,888 DEBUG nova.manager [b3348ba3-5a01-4a0f-888c-55ec298b64b9 None None] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=6760) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2011-12-21 10:21:41,718 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/cb7d6887f61381feaa684f7b203baa586f53c02c_sm /var/lib/nova/instances/instance-00000006/disk from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:21:41,869 INFO nova.virt.libvirt_conn [78db929e-72d6-47b6-a663-c435d1b5316e None None] instance instance-00000006: injecting key into image 97bb58ab-a17e-4611-a8de-875064ebeef0
2011-12-21 10:21:41,870 DEBUG nova.utils [78db929e-72d6-47b6-a663-c435d1b5316e None None] Running cmd (subprocess): sudo qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-00000006/disk from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:21:41,907 DEBUG nova.utils [78db929e-72d6-47b6-a663-c435d1b5316e None None] Running cmd (subprocess): sudo kpartx -a /dev/nbd15 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:21:41,925 DEBUG nova.utils [78db929e-72d6-47b6-a663-c435d1b5316e None None] Running cmd (subprocess): sudo kpartx -d /dev/nbd15 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:21:41,940 DEBUG nova.utils [78db929e-72d6-47b6-a663-c435d1b5316e None None] Running cmd (subprocess): sudo qemu-nbd -d /dev/nbd15 from (pid=6760) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 10:21:41,969 WARNING nova.virt.libvirt_conn [78db929e-72d6-47b6-a663-c435d1b5316e None None] instance instance-00000006: ignoring error injecting data into image 97bb58ab-a17e-4611-a8de-875064ebeef0 (Mapped device was not found (we can only inject raw disk images): /dev/mapper/nbd15p1)
2011-12-21 10:21:43,391 DEBUG nova.virt.libvirt_conn [-] instance instance-00000006: is running from (pid=6760) spawn /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:670
2011-12-21 10:21:43,392 DEBUG nova.compute.manager [-] Checking state of instance-00000006 from (pid=6760) _get_power_state /usr/lib/python2.7/dist-packages/nova/compute/manager.py:192
2011-12-21 10:21:44,043 INFO nova.virt.libvirt_conn [-] Instance instance-00000006 spawned successfully.


Revision history for this message
Thierry Carrez (ttx) wrote :

Given the error message, this could be a Precise issue. What was your last known-good config?

Changed in nova:
status: New → Incomplete
Revision history for this message
Kevin Jackson (kevin-linuxservices) wrote :

My last known good config was

Ubuntu 11.10
Diablo 2011.3~3 (oneiric-proposed)

I'll reinstall 11.10 and install E2 and report back.

Revision history for this message
Kevin Jackson (kevin-linuxservices) wrote :

Hmm - wasn't expecting this - fails on Ubuntu 11.10 with nova-core from milestone PPA.

Ubuntu 11.10 Fresh Install with latest updates.

apt-get install -y python-software-properties
add-apt-repository ppa:nova-core/milestone
apt-get install -y rabbitmq-server nova-api nova-objectstore nova-scheduler nova-network nova-compute glance qemu unzip python-support

Same nova.conf as before
Created network configs (private and floating)
Used Oneiric cloud image
cloud-publish-tarball ...

euca-run-instances ami-00000002 -k openstack -t m1.tiny

Says spawned successfully, but the instance hasn't actually booted.
euca-get-console-output is empty (apart from the beginning identifier).
Same warning as before (I'm presuming the warning is the crux; I hadn't noticed it before).

2011-12-21 12:41:47,059 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/da4b9237bacccdf19c0760cab7aec4a8359010b0_sm /var/lib/nova/instances/instance-00000001/disk from (pid=9308) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 12:41:47,553 INFO nova.virt.libvirt_conn [db1901eb-7181-489e-9051-0f303a1e5a5f None None] instance instance-00000001: injecting key into image 2
2011-12-21 12:41:47,554 DEBUG nova.utils [db1901eb-7181-489e-9051-0f303a1e5a5f None None] Running cmd (subprocess): sudo qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-00000001/disk from (pid=9308) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 12:41:48,809 DEBUG nova.utils [db1901eb-7181-489e-9051-0f303a1e5a5f None None] Running cmd (subprocess): sudo kpartx -a /dev/nbd15 from (pid=9308) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 12:41:48,916 DEBUG nova.utils [db1901eb-7181-489e-9051-0f303a1e5a5f None None] Running cmd (subprocess): sudo kpartx -d /dev/nbd15 from (pid=9308) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 12:41:48,929 DEBUG nova.utils [db1901eb-7181-489e-9051-0f303a1e5a5f None None] Running cmd (subprocess): sudo qemu-nbd -d /dev/nbd15 from (pid=9308) execute /usr/lib/python2.7/dist-packages/nova/utils.py:198
2011-12-21 12:41:48,954 WARNING nova.virt.libvirt_conn [db1901eb-7181-489e-9051-0f303a1e5a5f None None] instance instance-00000001: ignoring error injecting data into image 2 (Mapped device was not found (we can only inject raw disk images): /dev/mapper/nbd15p1)
2011-12-21 12:41:51,260 DEBUG nova.virt.libvirt_conn [-] instance instance-00000001: is running from (pid=9308) spawn /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:670

Revision history for this message
Kevin Jackson (kevin-linuxservices) wrote :

(there was an apt-get update after the add-apt...)

Revision history for this message
Kevin Jackson (kevin-linuxservices) wrote :

Noticed this in dmesg

[ 2221.217072] nbd15: unknown partition table
[ 2222.262990] nbd15: NBD_DISCONNECT
[ 2222.263089] nbd15: Receive control failed (result -32)
[ 2222.267259] nbd15: queue cleared
[ 2223.122312] type=1400 audit(1324471309.796:15): apparmor="DENIED" operation="open" parent=5629 profile="/usr/lib/libvirt/virt-aa-helper" name="/var/lib/nova/instances/_base/da4b9237bacccdf19c0760cab7aec4a8359010b0_sm" pid=10338 comm="virt-aa-helper" requested_mask="r" denied_mask="r" fsuid=0 ouid=108
[ 2223.536709] type=1400 audit(1324471310.212:16): apparmor="STATUS" operation="profile_load" name="libvirt-fcf9aaa0-2f5b-122a-63de-8766a9f57f79" pid=10339 comm="apparmor_parser"

Revision history for this message
Kevin Jackson (kevin-linuxservices) wrote :

Attaching to VNC says:

Booting from Hard Disk...
Boot failed: not a bootable disk

I'm trying an alternative home-grown image, as opposed to the Ubuntu ones, to see if that's the issue (big assumption that it's not). Then I'll go back to a known-good config to rule out hardware or my process as the issue.

Revision history for this message
Thierry Carrez (ttx) wrote :

You can also try registering the .img (from the cloud-images website) directly through Glance and see if it yields better results.
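
The suggestion above can be sketched with the Essex-era glance client; the image name, formats, and filename here are illustrative assumptions, not taken from the bug report:

```shell
# Fetch the raw .img (not the tarball) and register it directly in Glance.
# Hypothetical example; adjust name/formats for your image.
wget http://uec-images.ubuntu.com/releases/oneiric/release/ubuntu-11.10-server-cloudimg-i386.img
glance add name="oneiric-cloudimg-i386" is_public=true \
    container_format=bare disk_format=qcow2 < ubuntu-11.10-server-cloudimg-i386.img
```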

Revision history for this message
Kevin Jackson (kevin-linuxservices) wrote :

I've tried going back to good known config:

Ubuntu 11.10
Diablo (with packages from proposed)

And all is well. Will re-install 11.10 with E2 and try again.

Revision history for this message
chiru (mcirauqui) wrote :

Same issue running Ubuntu 11.10 and nova and glance from milestone ppa (2012.1~e2-0ubuntu0~ppa1~oneiric1).

/var/log/nova/nova-compute.log

2012-01-04 09:39:52,453 DEBUG nova.manager [8ab0196c-affd-4d98-bfb9-ee77b0556d0b None None] Notifying Schedulers of capabilities ... from (pid=27900) _publish_service_capabilities /usr/lib/python2.7/dist-packages/nova/manager.py:193
2012-01-04 09:39:52,454 DEBUG nova.rpc [8ab0196c-affd-4d98-bfb9-ee77b0556d0b None None] Making asynchronous fanout cast... from (pid=27900) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:763
2012-01-04 09:39:52,459 DEBUG nova.manager [8ab0196c-affd-4d98-bfb9-ee77b0556d0b None None] Running periodic task ComputeManager._poll_rescued_instances from (pid=27900) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2012-01-04 09:39:52,460 DEBUG nova.manager [8ab0196c-affd-4d98-bfb9-ee77b0556d0b None None] Running periodic task ComputeManager._sync_power_states from (pid=27900) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2012-01-04 09:39:53,284 DEBUG nova.manager [8ab0196c-affd-4d98-bfb9-ee77b0556d0b None None] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=27900) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2012-01-04 09:39:53,284 DEBUG nova.manager [8ab0196c-affd-4d98-bfb9-ee77b0556d0b None None] Running periodic task ComputeManager._poll_rebooting_instances from (pid=27900) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2012-01-04 09:39:53,285 DEBUG nova.manager [8ab0196c-affd-4d98-bfb9-ee77b0556d0b None None] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=27900) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2012-01-04 09:39:53,285 DEBUG nova.compute.manager [8ab0196c-affd-4d98-bfb9-ee77b0556d0b None None] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=27900) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1947
2012-01-04 09:39:53,285 DEBUG nova.manager [8ab0196c-affd-4d98-bfb9-ee77b0556d0b None None] Running periodic task ComputeManager._report_driver_status from (pid=27900) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2012-01-04 09:39:53,285 DEBUG nova.manager [8ab0196c-affd-4d98-bfb9-ee77b0556d0b None None] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=27900) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:151
2012-01-04 09:40:13,120 DEBUG nova.rpc [-] received {u'_context_roles': [], u'_context_request_id': u'024c2b72-0920-44b5-bdf8-e08f673fad13', u'_context_read_deleted': u'no', u'args': {u'instance_uuid': u'cb3b5a38-dba5-4c8a-8bca-3e94b2b2bcaa', u'requested_networks': None, u'admin_password': None, u'injected_files': None}, u'_context_auth_token': None, u'_context_strategy': u'noauth', u'_context_is_admin': True, u'_context_project_id': u'novaproject', u'_context_timestamp': u'2012-01-04T08:40:12.510013', u'_context_user_id': u'9c2118fe-511e-4b15-bc9f-a0f51f72c38f', u'method': u'run_instance', u'_context_remote_address': u'127.0.0.1'} from (pid=27900) __call__ /usr/lib/python2.7/dist-packages/no...

Revision history for this message
Pádraig Brady (p-draigbrady) wrote :

Note that that message (which will be different in Essex-2) is not the real issue, I think.
It's thrown because nova is looking for the first partition, and the image is probably partitionless.
One can see in virt/libvirt/connection.py that a partition="1" is hardcoded for all kernel images.
So I'm guessing that the kernel image was not registered correctly.
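
The explanation above can be sketched as follows; this is a hypothetical reconstruction of the injection-target selection, not the actual nova code (the function name and layout are assumptions):

```python
def mapped_device(nbd_dev, partition):
    """Return the device nova would try to inject files into (sketch).

    When a partition number is set, injection targets the kpartx-created
    mapping under /dev/mapper. A partitionless cloud image produces no
    such mapping, so the later existence check fails with "Mapped device
    was not found (we can only inject raw disk images)".
    """
    if partition:
        # e.g. /dev/nbd15 with partition 1 -> /dev/mapper/nbd15p1
        return "/dev/mapper/%sp%d" % (nbd_dev.split("/")[-1], partition)
    # No partition requested: inject straight into the whole device.
    return nbd_dev
```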

Revision history for this message
Pádraig Brady (p-draigbrady) wrote :

In comment #10, s/all kernel images/all images with no associated kernel image/.
I should also note that failure to inject ssh keys etc. should not stop the instance from starting.

Revision history for this message
Patrick Hetu (patrick-hetu) wrote :

I have the same problem here with the latest version of nova from the openstack-trunk-testing PPA. I also have some nova-rootwrap problems, but that's another issue.

I have added a raw partition in glance and then started a new lxc container with the raw image, and the command:

  sudo nova-rootwrap mount /dev/nbd15 /tmp/tmp0fmlKu

is failing because the partition is /dev/nbd15p1. So maybe device + 'p1' should be hardcoded.

the logs:

2012-02-09 16:31:29,375 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/ac44f323e1cbe558388b540e3d24f6490350099f_sm /var/lib/nova/instances/instance-0000000b/disk from (pid=10441) execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2012-02-09 16:31:29,532 INFO nova.virt.libvirt_conn [-] [instance: d29a0728-9dc2-43d8-b22a-27e085d0d4bd] Injecting key into image a791bb55-7bc9-4e0d-a8d6-41bdf03c5201
2012-02-09 16:31:29,533 DEBUG nova.utils [-] Running cmd (subprocess): sudo nova-rootwrap qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-0000000b/disk from (pid=10441) execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2012-02-09 16:31:29,616 DEBUG nova.utils [-] Running cmd (subprocess): sudo nova-rootwrap mount /dev/nbd15 /tmp/tmp0fmlKu from (pid=10441) execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2012-02-09 16:31:29,787 DEBUG nova.utils [-] Result was 32 from (pid=10441) execute /usr/lib/python2.7/dist-packages/nova/utils.py:224
2012-02-09 16:31:29,788 DEBUG nova.utils [-] Unexpected error while running command.
Command: sudo nova-rootwrap mount /dev/nbd15 /tmp/tmp0fmlKu
Exit code: 32
Stdout: ''
Stderr: 'mount\xc2\xa0: vous devez indiquer le type de syst\xc3\xa8me de fichiers\n' (i.e. "mount: you must specify the filesystem type") from (pid=10441) trycmd /usr/lib/python2.7/dist-packages/nova/utils.py:267

Revision history for this message
Patrick Hetu (patrick-hetu) wrote :

I've tested hardcoding the partition number to 1 in the Mount class (nova/virt/disk/mount.py) and the instance
boots without errors. I think it would be safe to use the first partition with an image of type raw + use_qcow=true in
the nova config + lxc as a backend combination.
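
Rather than hardcoding partition 1, the fallback implied above could be sketched like this: prefer the kpartx-created first-partition mapping when it exists, otherwise mount the whole device. The function name and the injectable `exists` parameter are illustrative assumptions, not nova's actual API:

```python
import os

def device_to_mount(nbd_dev, exists=os.path.exists):
    """Pick the device to mount for file injection (hypothetical sketch)."""
    # Partitioned image: kpartx created /dev/mapper/<dev>p1, so mount that.
    part1 = "/dev/mapper/%sp1" % nbd_dev.split("/")[-1]
    if exists(part1):
        return part1
    # Partitionless cloud image: mount the whole device directly.
    return nbd_dev
```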

Revision history for this message
Thierry Carrez (ttx) wrote :

I'm a bit lost. Could anyone confirm this still exists in final Essex / current trunk?

Revision history for this message
Kevin Jackson (kevin-linuxservices) wrote : Re: [Bug 907269] Re: Mapped device was not found (we can only inject raw disk images): /dev/mapper/nbd15p1) : Precise A1 and E2

I've not seen this in a long while, certainly not since the days when I raised this bug.
I was going to close it until someone else added that they had this issue, and then I forgot to keep an eye on it.

You get a +1 from me to close this.

Cheers,

Kev

On 7 June 2012 14:53, Thierry Carrez <email address hidden> wrote:

> I'm a bit lost. Anyone could confirm this still exists in final Essex /
> current trunk ?
>

Revision history for this message
Thierry Carrez (ttx) wrote :

Closing as suggested, reopen if you reproduce.

Changed in nova:
status: Incomplete → Invalid