Running with --num-instances=16, I saw a couple of instances go into the ERROR state. On the hypervisor side, I saw the following issue:
2015-02-04 09:03:02.840 5077 ERROR nova.compute.manager [-] [instance: e277cf66-167f-4e81-a141-8dec12290015] Instance failed to spawn
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] Traceback (most recent call last):
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2243, in _build_resources
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] yield resources
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2113, in _build_and_run_instance
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] block_device_info=block_device_info)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2622, in spawn
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] block_device_info, disk_info=disk_info)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4508, in _create_domain_and_network
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] power_on=power_on)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4432, in _create_domain
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] LOG.error(err)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] six.reraise(self.type_, self.value, self.tb)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4423, in _create_domain
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] domain.createWithFlags(launch_flags)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] rv = execute(f, *args, **kwargs)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] six.reraise(c, e, tb)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] rv = meth(*args, **kwargs)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] File "/usr/lib64/python2.7/site-packages/libvirt.py", line 993, in createWithFlags
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015] libvirtError: error from service: CreateMachine: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: e277cf66-167f-4e81-a141-8dec12290015]
2015-02-04 09:03:02.843 5077 AUDIT nova.compute.manager [req-663bcedd-8f56-4a84-81b1-4e7321a5f30e None] [instance: e277cf66-167f-4e81-a141-8dec12290015] Terminating instance
Reviewed: https://review.openstack.org/153004
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=5a542e770648469b0fbb638f6ba53f95424252ec
Submitter: Jenkins
Branch: master
commit 5a542e770648469b0fbb638f6ba53f95424252ec
Author: Dan Smith <email address hidden>
Date: Wed Feb 4 10:10:25 2015 -0800
Add max_concurrent_builds limit configuration
Right now, nova-compute will attempt to build an infinite number of
instances, if asked to do so. This won't work on any machine, regardless
of the resources, if the number of instances is too large.
We could default this to zero to retain the current behavior, but
the current behavior is really not sane in any case, so I think we
should default to something. Ten instances for a single compute node
seems like a reasonable default. If you can do more than ten at a
time, you're definitely not running a cloud based on default config.
DocImpact: Adds a new configuration variable
Closes-Bug: #1418155
Change-Id: I412d2849fd16430e6926fc983c031babb7ad04d0
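For illustration, the idea behind a max_concurrent_builds cap can be sketched with a semaphore that bounds how many builds run at once, so excess requests wait instead of all spawning simultaneously. This is a minimal standalone sketch of the concept, not Nova's actual implementation; the names here (build_instance, MAX_CONCURRENT_BUILDS) are hypothetical.

```python
import threading

# Hypothetical sketch of the concept behind max_concurrent_builds:
# a semaphore caps how many instance builds may run concurrently.
MAX_CONCURRENT_BUILDS = 10  # the default chosen by the patch

build_semaphore = threading.Semaphore(MAX_CONCURRENT_BUILDS)

def build_instance(instance_id):
    # Each build acquires a slot; when all slots are taken, further
    # builds block here until a running build finishes and releases one.
    with build_semaphore:
        return "built %s" % instance_id

# Example: 16 requested builds (as in the bug report) are serviced
# at most 10 at a time rather than all at once.
results = []
threads = [threading.Thread(target=lambda i=i: results.append(build_instance(i)))
           for i in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With an unbounded build count (the pre-patch behavior), all 16 builds would hit libvirt at once, which is what overwhelmed the hypervisor in the traceback above.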