On a fresh devstack install on a VM with 1 GB of RAM, launching a single m1.tiny instance sometimes fails with a ComputeResourcesUnavailable exception.
The compute log below shows that the periodic update_available_resource task subtracts the instance's resources from the available pool before the resource claim is attempted, so the claim itself then fails.
2012-09-23 16:49:56 DEBUG nova.utils [-] Got semaphore "compute_resources" for method "update_available_resource"... from (pid=4278) inner /opt/stack/nova/nova/utils.py:713
2012-09-23 16:50:02 DEBUG nova.compute.resource_tracker [-] Hypervisor: free ram (MB): 197 from (pid=4278) _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:447
2012-09-23 16:50:02 DEBUG nova.compute.resource_tracker [-] Hypervisor: free disk (GB): 4 from (pid=4278) _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:448
2012-09-23 16:50:02 DEBUG nova.compute.resource_tracker [-] Hypervisor: free VCPUs: 1 from (pid=4278) _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:453
2012-09-23 16:50:02 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 483
2012-09-23 16:50:02 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 6
2012-09-23 16:50:02 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1
2012-09-23 16:50:02 INFO nova.compute.resource_tracker [-] Compute_service record updated for devstack1
2012-09-23 16:50:02 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rebooting_instances from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:50:02 DEBUG nova.manager [-] Skipping ComputeManager._cleanup_running_deleted_instances, 21 ticks left until next run from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:167
2012-09-23 16:50:02 DEBUG nova.manager [-] Running periodic task ComputeManager._check_instance_build_time from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:50:02 DEBUG nova.manager [-] Running periodic task ComputeManager._heal_instance_info_cache from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:50:02 DEBUG nova.manager [-] Skipping ComputeManager._run_image_cache_manager_pass, 31 ticks left until next run from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:167
2012-09-23 16:50:02 DEBUG nova.manager [-] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:50:02 DEBUG nova.compute.manager [-] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=4278) _reclaim_queued_deletes /opt/stack/nova/nova/compute/manager.py:2699
2012-09-23 16:50:02 DEBUG nova.manager [-] Running periodic task ComputeManager._report_driver_status from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:50:02 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:02 DEBUG nova.manager [-] Running periodic task ComputeManager._publish_service_capabilities from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:02 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=4278) _publish_service_capabilities /opt/stack/nova/nova/manager.py:231
2012-09-23 16:51:02 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous fanout cast... from (pid=4278) fanout_cast /opt/stack/nova/nova/openstack/common/rpc/amqp.py:384
2012-09-23 16:51:02 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rescued_instances from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:02 DEBUG nova.manager [-] Running periodic task ComputeManager._sync_power_states from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:05 DEBUG nova.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'anotherrole', u'Member', u'admin'], u'_context_request_id': u'req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442', u'_context_quota_class': None, u'_context_project_name': u'demo', u'_context_service_catalog': [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.4.1:8774/v2/2c012ed232df4f1d877ff80c06c48072', u'region': u'RegionOne', u'publicURL': u'http://192.168.4.1:8774/v2/2c012ed232df4f1d877ff80c06c48072', u'id': u'0ffdf30965914fca873c71c2c5f11be8', u'internalURL': u'http://192.168.4.1:8774/v2/2c012ed232df4f1d877ff80c06c48072'}], u'type': u'compute', u'name': u'nova'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.4.1:3333', u'region': u'RegionOne', u'publicURL': u'http://192.168.4.1:3333', u'id': u'47d653a5a4f04683bf4c13ba9219683b', u'internalURL': u'http://192.168.4.1:3333'}], u'type': u's3', u'name': u's3'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.4.1:9292', u'region': u'RegionOne', u'publicURL': u'http://192.168.4.1:9292', u'id': u'773cb5be5f3b4705a6e56f5d0ebcff3a', u'internalURL': u'http://192.168.4.1:9292'}], u'type': u'image', u'name': u'glance'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.4.1:8776/v1/2c012ed232df4f1d877ff80c06c48072', u'region': u'RegionOne', u'publicURL': u'http://192.168.4.1:8776/v1/2c012ed232df4f1d877ff80c06c48072', u'id': u'9ee6b8812288462281914ae8befeb4c0', u'internalURL': u'http://192.168.4.1:8776/v1/2c012ed232df4f1d877ff80c06c48072'}], u'type': u'volume', u'name': u'cinder'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.4.1:8773/services/Admin', u'region': u'RegionOne', u'publicURL': u'http://192.168.4.1:8773/services/Cloud', u'id': u'ec192fd1cd1a4791ad22653a2dc570c1', u'internalURL': u'http://192.168.4.1:8773/services/Cloud'}], u'type': u'ec2', u'name': u'ec2'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': 
u'http://192.168.4.1:35357/v2.0', u'region': u'RegionOne', u'publicURL': u'http://192.168.4.1:5000/v2.0', u'id': u'9f755326a23c408b8a90cef9db8f0af8', u'internalURL': u'http://192.168.4.1:5000/v2.0'}], u'type': u'identity', u'name': u'keystone'}], u'_context_user_name': u'demo', u'_context_auth_token': '<SANITIZED>', u'args': {u'request_spec': {u'block_device_mapping': [], u'image': {u'status': u'active', u'name': u'cirros-0.3.0-x86_64-uec', u'deleted': False, u'container_format': u'ami', u'created_at': u'2012-09-23T14:39:12.000000', u'disk_format': u'ami', u'updated_at': u'2012-09-23T14:39:13.000000', u'id': u'233f4f99-1826-4368-a88f-58fefa6214bd', u'owner': u'29f71316af1a4f9b9d7bd88b6ba58eba', u'min_ram': 0, u'checksum': u'2f81976cae15c16ef0010c51e3a6c163', u'min_disk': 0, u'is_public': True, u'deleted_at': None, u'properties': {u'kernel_id': u'ca7a5cdc-612c-4446-a088-122b501dd8dc', u'ramdisk_id': u'97b24c9b-7678-4f2a-a8fb-42d05fe7b80a'}, u'size': 25165824}, u'instance_type': {u'disabled': False, u'root_gb': 0, u'name': u'm1.tiny', u'flavorid': u'1', u'deleted': False, u'created_at': None, u'ephemeral_gb': 0, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'extra_specs': {}, u'swap': 0, u'rxtx_factor': 1.0, u'is_public': True, u'deleted_at': None, u'vcpu_weight': None, u'id': 2}, u'instance_properties': {u'vm_state': u'building', u'availability_zone': None, u'launch_index': 0, u'ephemeral_gb': 0, u'instance_type_id': 2, u'user_data': None, u'vm_mode': None, u'reservation_id': u'r-rd0cjoan', u'root_device_name': None, u'user_id': u'51cea18d4ac944d9afafbec05157063d', u'display_description': u'server1', u'key_data': None, u'power_state': 0, u'progress': 0, u'project_id': u'2c012ed232df4f1d877ff80c06c48072', u'config_drive': u'', u'ramdisk_id': u'97b24c9b-7678-4f2a-a8fb-42d05fe7b80a', u'access_ip_v6': None, u'access_ip_v4': None, u'kernel_id': u'ca7a5cdc-612c-4446-a088-122b501dd8dc', u'key_name': None, u'display_name': u'server1', u'config_drive_id': u'', 
u'root_gb': 0, u'locked': False, u'launch_time': u'2012-09-23T14:51:04Z', u'memory_mb': 512, u'vcpus': 1, u'image_ref': u'233f4f99-1826-4368-a88f-58fefa6214bd', u'architecture': None, u'auto_disk_config': None, u'os_type': None, u'metadata': {}}, u'security_group': [u'default'], u'instance_uuids': [u'8362e9a5-8a9e-41a6-9c2d-295f20d38530']}, u'requested_networks': None, u'filter_properties': {u'config_options': {}, u'limits': {u'memory_mb': 1492.5}, u'request_spec': {u'block_device_mapping': [], u'image': {u'status': u'active', u'name': u'cirros-0.3.0-x86_64-uec', u'deleted': False, u'container_format': u'ami', u'created_at': u'2012-09-23T14:39:12.000000', u'disk_format': u'ami', u'updated_at': u'2012-09-23T14:39:13.000000', u'id': u'233f4f99-1826-4368-a88f-58fefa6214bd', u'owner': u'29f71316af1a4f9b9d7bd88b6ba58eba', u'min_ram': 0, u'checksum': u'2f81976cae15c16ef0010c51e3a6c163', u'min_disk': 0, u'is_public': True, u'deleted_at': None, u'properties': {u'kernel_id': u'ca7a5cdc-612c-4446-a088-122b501dd8dc', u'ramdisk_id': u'97b24c9b-7678-4f2a-a8fb-42d05fe7b80a'}, u'size': 25165824}, u'instance_type': {u'disabled': False, u'root_gb': 0, u'name': u'm1.tiny', u'flavorid': u'1', u'deleted': False, u'created_at': None, u'ephemeral_gb': 0, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'extra_specs': {}, u'swap': 0, u'rxtx_factor': 1.0, u'is_public': True, u'deleted_at': None, u'vcpu_weight': None, u'id': 2}, u'instance_properties': {u'vm_state': u'building', u'availability_zone': None, u'launch_index': 0, u'ephemeral_gb': 0, u'instance_type_id': 2, u'user_data': None, u'vm_mode': None, u'reservation_id': u'r-rd0cjoan', u'root_device_name': None, u'user_id': u'51cea18d4ac944d9afafbec05157063d', u'display_description': u'server1', u'key_data': None, u'power_state': 0, u'progress': 0, u'project_id': u'2c012ed232df4f1d877ff80c06c48072', u'config_drive': u'', u'ramdisk_id': u'97b24c9b-7678-4f2a-a8fb-42d05fe7b80a', u'access_ip_v6': None, u'access_ip_v4': None, 
u'kernel_id': u'ca7a5cdc-612c-4446-a088-122b501dd8dc', u'key_name': None, u'display_name': u'server1', u'config_drive_id': u'', u'root_gb': 0, u'locked': False, u'launch_time': u'2012-09-23T14:51:04Z', u'memory_mb': 512, u'vcpus': 1, u'image_ref': u'233f4f99-1826-4368-a88f-58fefa6214bd', u'architecture': None, u'auto_disk_config': None, u'os_type': None, u'metadata': {}}, u'security_group': [u'default'], u'instance_uuids': [u'8362e9a5-8a9e-41a6-9c2d-295f20d38530']}, u'instance_type': {u'disabled': False, u'root_gb': 0, u'name': u'm1.tiny', u'flavorid': u'1', u'deleted': False, u'created_at': None, u'ephemeral_gb': 0, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'extra_specs': {}, u'swap': 0, u'rxtx_factor': 1.0, u'is_public': True, u'deleted_at': None, u'vcpu_weight': None, u'id': 2}, u'retry': {u'num_attempts': 1, u'hosts': [u'devstack1']}, u'scheduler_hints': {}}, u'instance': {u'vm_state': u'building', u'availability_zone': None, u'terminated_at': None, u'ephemeral_gb': 0, u'instance_type_id': 2, u'user_data': None, u'vm_mode': None, u'deleted_at': None, u'reservation_id': u'r-rd0cjoan', u'id': 1, u'security_groups': [{u'deleted_at': None, u'user_id': u'51cea18d4ac944d9afafbec05157063d', u'name': u'default', u'deleted': False, u'created_at': u'2012-09-23T14:50:51.000000', u'updated_at': None, u'rules': [], u'project_id': u'2c012ed232df4f1d877ff80c06c48072', u'id': 1, u'description': u'default'}], u'disable_terminate': False, u'user_id': u'51cea18d4ac944d9afafbec05157063d', u'uuid': u'8362e9a5-8a9e-41a6-9c2d-295f20d38530', u'server_name': None, u'default_swap_device': None, u'info_cache': {u'instance_uuid': u'8362e9a5-8a9e-41a6-9c2d-295f20d38530', u'deleted': False, u'created_at': u'2012-09-23T14:51:04.000000', u'updated_at': None, u'network_info': u'[]', u'deleted_at': None, u'id': 1}, u'hostname': u'server1', u'launched_on': None, u'display_description': u'server1', u'key_data': None, u'kernel_id': u'ca7a5cdc-612c-4446-a088-122b501dd8dc', 
u'power_state': 0, u'default_ephemeral_device': None, u'progress': 0, u'project_id': u'2c012ed232df4f1d877ff80c06c48072', u'launched_at': None, u'scheduled_at': u'2012-09-23T14:51:05.112109', u'ramdisk_id': u'97b24c9b-7678-4f2a-a8fb-42d05fe7b80a', u'access_ip_v6': None, u'access_ip_v4': None, u'deleted': False, u'key_name': None, u'updated_at': u'2012-09-23T14:51:05.249903', u'host': u'devstack1', u'display_name': u'server1', u'task_state': u'scheduling', u'shutdown_terminate': False, u'architecture': None, u'root_gb': 0, u'locked': False, u'name': u'instance-00000001', u'created_at': u'2012-09-23T14:51:04.000000', u'launch_index': 0, u'metadata': [], u'memory_mb': 512, u'instance_type': {u'disabled': False, u'root_gb': 0, u'deleted_at': None, u'name': u'm1.tiny', u'deleted': False, u'created_at': None, u'ephemeral_gb': 0, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'swap': 0, u'rxtx_factor': 1.0, u'is_public': True, u'flavorid': u'1', u'vcpu_weight': None, u'id': 2}, u'vcpus': 1, u'image_ref': u'233f4f99-1826-4368-a88f-58fefa6214bd', u'root_device_name': None, u'auto_disk_config': None, u'os_type': None, u'config_drive': u''}, u'admin_password': '<SANITIZED>', u'injected_files': [], u'is_first_time': True}, u'_context_instance_lock_checked': False, u'_context_is_admin': True, u'version': u'2.0', u'_context_project_id': u'2c012ed232df4f1d877ff80c06c48072', u'_context_timestamp': u'2012-09-23T14:51:02.961390', u'_context_read_deleted': u'no', u'_context_user_id': u'51cea18d4ac944d9afafbec05157063d', u'method': u'run_instance', u'_context_remote_address': u'192.168.4.1'} from (pid=4278) _safe_log /opt/stack/nova/nova/openstack/common/rpc/common.py:195
2012-09-23 16:51:05 DEBUG nova.openstack.common.rpc.amqp [-] unpacked context: {'project_name': u'demo', 'user_id': u'51cea18d4ac944d9afafbec05157063d', 'roles': [u'anotherrole', u'Member', u'admin'], 'timestamp': u'2012-09-23T14:51:02.961390', 'auth_token': '<SANITIZED>', 'remote_address': u'192.168.4.1', 'quota_class': None, 'is_admin': True, 'service_catalog': [{u'endpoints': [{u'adminURL': u'http://192.168.4.1:8774/v2/2c012ed232df4f1d877ff80c06c48072', u'region': u'RegionOne', u'internalURL': u'http://192.168.4.1:8774/v2/2c012ed232df4f1d877ff80c06c48072', u'id': u'0ffdf30965914fca873c71c2c5f11be8', u'publicURL': u'http://192.168.4.1:8774/v2/2c012ed232df4f1d877ff80c06c48072'}], u'endpoints_links': [], u'type': u'compute', u'name': u'nova'}, {u'endpoints': [{u'adminURL': u'http://192.168.4.1:3333', u'region': u'RegionOne', u'internalURL': u'http://192.168.4.1:3333', u'id': u'47d653a5a4f04683bf4c13ba9219683b', u'publicURL': u'http://192.168.4.1:3333'}], u'endpoints_links': [], u'type': u's3', u'name': u's3'}, {u'endpoints': [{u'adminURL': u'http://192.168.4.1:9292', u'region': u'RegionOne', u'internalURL': u'http://192.168.4.1:9292', u'id': u'773cb5be5f3b4705a6e56f5d0ebcff3a', u'publicURL': u'http://192.168.4.1:9292'}], u'endpoints_links': [], u'type': u'image', u'name': u'glance'}, {u'endpoints': [{u'adminURL': u'http://192.168.4.1:8776/v1/2c012ed232df4f1d877ff80c06c48072', u'region': u'RegionOne', u'internalURL': u'http://192.168.4.1:8776/v1/2c012ed232df4f1d877ff80c06c48072', u'id': u'9ee6b8812288462281914ae8befeb4c0', u'publicURL': u'http://192.168.4.1:8776/v1/2c012ed232df4f1d877ff80c06c48072'}], u'endpoints_links': [], u'type': u'volume', u'name': u'cinder'}, {u'endpoints': [{u'adminURL': u'http://192.168.4.1:8773/services/Admin', u'region': u'RegionOne', u'internalURL': u'http://192.168.4.1:8773/services/Cloud', u'id': u'ec192fd1cd1a4791ad22653a2dc570c1', u'publicURL': u'http://192.168.4.1:8773/services/Cloud'}], u'endpoints_links': [], u'type': u'ec2', 
u'name': u'ec2'}, {u'endpoints': [{u'adminURL': u'http://192.168.4.1:35357/v2.0', u'region': u'RegionOne', u'internalURL': u'http://192.168.4.1:5000/v2.0', u'id': u'9f755326a23c408b8a90cef9db8f0af8', u'publicURL': u'http://192.168.4.1:5000/v2.0'}], u'endpoints_links': [], u'type': u'identity', u'name': u'keystone'}], 'request_id': u'req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442', 'instance_lock_checked': False, 'project_id': u'2c012ed232df4f1d877ff80c06c48072', 'user_name': u'demo', 'read_deleted': u'no'} from (pid=4278) _safe_log /opt/stack/nova/nova/openstack/common/rpc/common.py:195
2012-09-23 16:51:05 DEBUG nova.utils [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] Got semaphore "8362e9a5-8a9e-41a6-9c2d-295f20d38530" for method "do_run_instance"... from (pid=4278) inner /opt/stack/nova/nova/utils.py:713
2012-09-23 16:51:12 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:12 DEBUG nova.manager [-] Running periodic task ComputeManager._instance_usage_audit from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:12 DEBUG nova.manager [-] Running periodic task ComputeManager.update_available_resource from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:12 DEBUG nova.utils [-] Got semaphore "compute_resources" for method "update_available_resource"... from (pid=4278) inner /opt/stack/nova/nova/utils.py:713
libvir: QEMU error : Domain not found: no domain with matching name 'instance-00000001'
2012-09-23 16:51:14 AUDIT nova.compute.manager [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] [instance: 8362e9a5-8a9e-41a6-9c2d-295f20d38530] Starting instance...
2012-09-23 16:51:19 DEBUG nova.compute.resource_tracker [-] Hypervisor: free ram (MB): 110 from (pid=4278) _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:447
2012-09-23 16:51:19 DEBUG nova.compute.resource_tracker [-] Hypervisor: free disk (GB): 4 from (pid=4278) _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:448
2012-09-23 16:51:19 DEBUG nova.compute.resource_tracker [-] Hypervisor: free VCPUs: 1 from (pid=4278) _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:453
2012-09-23 16:51:19 AUDIT nova.compute.resource_tracker [-] Free ram (MB): -29
2012-09-23 16:51:19 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 6
2012-09-23 16:51:19 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0
2012-09-23 16:51:19 INFO nova.compute.resource_tracker [-] Compute_service record updated for devstack1
2012-09-23 16:51:19 DEBUG nova.utils [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] Got semaphore "compute_resources" for method "update_usage"... from (pid=4278) inner /opt/stack/nova/nova/utils.py:713
2012-09-23 16:51:19 DEBUG nova.utils [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] Got semaphore "compute_resources" for method "update_usage"... from (pid=4278) inner /opt/stack/nova/nova/utils.py:713
2012-09-23 16:51:20 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous call on network ... from (pid=4278) multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:351
2012-09-23 16:51:20 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 7e273cf620894f9f92e36f4154cb48cd from (pid=4278) multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:354
2012-09-23 16:51:20 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rebooting_instances from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:20 DEBUG nova.manager [-] Skipping ComputeManager._cleanup_running_deleted_instances, 20 ticks left until next run from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:167
2012-09-23 16:51:20 DEBUG nova.manager [-] Running periodic task ComputeManager._check_instance_build_time from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:20 DEBUG nova.manager [-] Running periodic task ComputeManager._heal_instance_info_cache from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:20 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous call on network ... from (pid=4278) multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:351
2012-09-23 16:51:20 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is f8106a7eb47949c880a7daadf2970d83 from (pid=4278) multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:354
2012-09-23 16:51:20 DEBUG nova.openstack.common.rpc.amqp [-] Pool creating new connection from (pid=4278) create /opt/stack/nova/nova/openstack/common/rpc/amqp.py:57
2012-09-23 16:51:20 INFO nova.openstack.common.rpc.common [-] Connected to AMQP server on localhost:5672
2012-09-23 16:51:20 DEBUG nova.compute.manager [-] [instance: 8362e9a5-8a9e-41a6-9c2d-295f20d38530] Updated the info_cache for instance from (pid=4278) _heal_instance_info_cache /opt/stack/nova/nova/compute/manager.py:2393
2012-09-23 16:51:20 DEBUG nova.manager [-] Skipping ComputeManager._run_image_cache_manager_pass, 30 ticks left until next run from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:167
2012-09-23 16:51:20 DEBUG nova.manager [-] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:20 DEBUG nova.compute.manager [-] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=4278) _reclaim_queued_deletes /opt/stack/nova/nova/compute/manager.py:2699
2012-09-23 16:51:20 DEBUG nova.manager [-] Running periodic task ComputeManager._report_driver_status from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:20 INFO nova.compute.manager [-] Updating host status
2012-09-23 16:51:20 DEBUG nova.virt.libvirt.driver [-] Updating host stats from (pid=4278) update_status /opt/stack/nova/nova/virt/libvirt/driver.py:3020
2012-09-23 16:51:32 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=4278) periodic_tasks /opt/stack/nova/nova/manager.py:172
2012-09-23 16:51:50 DEBUG nova.compute.manager [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] [instance: 8362e9a5-8a9e-41a6-9c2d-295f20d38530] Instance network_info: |[VIF({'network': Network({'bridge': u'br100', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': [], 'address': u'10.0.0.2'})], 'version': 4, 'meta': {u'dhcp_server': u'10.0.0.1'}, 'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': u'8.8.4.4'})], 'routes': [], 'cidr': u'10.0.0.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': u'gateway', 'address': u'10.0.0.1'})}), Subnet({'ips': [], 'version': None, 'meta': {u'dhcp_server': None}, 'dns': [], 'routes': [], 'cidr': None, 'gateway': IP({'meta': {}, 'version': None, 'type': u'gateway', 'address': None})})], 'meta': {u'tenant_id': None, u'should_create_bridge': True, u'bridge_interface': u'eth1'}, 'id': u'23e839d4-1e99-4ab6-af97-3e3c5fb18015', 'label': u'private'}), 'meta': {}, 'id': u'6a381848-374a-4979-a1a8-83d02226d501', 'address': u'fa:16:3e:7e:ce:b1'})]| from (pid=4278) _allocate_network /opt/stack/nova/nova/compute/manager.py:715
2012-09-23 16:51:50 DEBUG nova.utils [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] Got semaphore "compute_resources" for method "begin_resource_claim"... from (pid=4278) inner /opt/stack/nova/nova/utils.py:713
2012-09-23 16:51:50 AUDIT nova.compute.resource_tracker [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] Attempting claim: memory 512 MB, disk 0 GB, VCPUs 1
2012-09-23 16:51:50 AUDIT nova.compute.resource_tracker [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] Total memory: 995 MB, used: 1024 MB, free: 0 MB
2012-09-23 16:51:50 AUDIT nova.compute.resource_tracker [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] Memory limit: 1492 MB, free: 468 MB
2012-09-23 16:51:50 INFO nova.compute.resource_tracker [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] Unable to claim resources. Free memory 468 MB < requested memory 512 MB
2012-09-23 16:51:50 DEBUG nova.compute.manager [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] [instance: 8362e9a5-8a9e-41a6-9c2d-295f20d38530] Deallocating network for instance from (pid=4278) _deallocate_network /opt/stack/nova/nova/compute/manager.py:769
2012-09-23 16:51:50 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous call on network ... from (pid=4278) multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:351
2012-09-23 16:51:50 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 6d9d7881a2654b7b87b661cc5added21 from (pid=4278) multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:354
2012-09-23 16:51:55 DEBUG nova.compute.manager [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] [instance: 8362e9a5-8a9e-41a6-9c2d-295f20d38530] Re-scheduling instance: attempt 1 from (pid=4278) _reschedule /opt/stack/nova/nova/compute/manager.py:575
2012-09-23 16:51:55 DEBUG nova.utils [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] Got semaphore "compute_resources" for method "update_usage"... from (pid=4278) inner /opt/stack/nova/nova/utils.py:713
2012-09-23 16:51:55 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous cast on scheduler... from (pid=4278) cast /opt/stack/nova/nova/openstack/common/rpc/amqp.py:376
2012-09-23 16:51:55 ERROR nova.compute.manager [req-6fdf3cb3-f95e-49b9-95ea-d49def3ee442 demo demo] [instance: 8362e9a5-8a9e-41a6-9c2d-295f20d38530] Build error: ['Traceback (most recent call last):\n', ' File "/opt/stack/nova/nova/compute/manager.py", line 486, in _run_instance\n limits):\n', ' File "/opt/stack/nova/nova/compute/resource_tracker.py", line 101, in __enter__\n raise exception.ComputeResourcesUnavailable()\n', 'ComputeResourcesUnavailable: Insufficient compute resources.\n']
I have run into this same failure before on earlier attempts.
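The arithmetic behind the failed claim can be reproduced from the AUDIT lines above. This is a hypothetical sketch, not Nova's actual resource_tracker code; the `can_claim` helper and its parameter names are made up for illustration, and the 1.5x RAM oversubscription ratio is inferred from the reported "Memory limit: 1492 MB" (995 MB * 1.5 = 1492.5).

```python
# Hypothetical sketch of the memory-claim check implied by the log.
# Values come from the AUDIT lines: total 995 MB, used 1024 MB
# (the instance was already counted by update_available_resource),
# requested 512 MB for m1.tiny.

def can_claim(total_mb, used_mb, requested_mb, ram_allocation_ratio=1.5):
    """Return True if requested_mb fits under the oversubscription limit."""
    limit_mb = total_mb * ram_allocation_ratio  # 995 * 1.5 = 1492.5 MB
    free_mb = limit_mb - used_mb                # 1492.5 - 1024 = 468.5 MB
    return requested_mb <= free_mb

# Matches the log: "Free memory 468 MB < requested memory 512 MB"
print(can_claim(total_mb=995, used_mb=1024, requested_mb=512))  # → False
```

Because the instance's 512 MB was already added to `used` by the periodic task, the subsequent claim for the same instance double-counts it and fails even though nothing else is running on the host.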