unable to create VMs

Bug #1978065 reported by YG Kumar
Affects: OpenStack Compute (nova)
Status: In Progress
Importance: Undecided
Assigned to: Unassigned

Bug Description

Hi,

We have an OSA Wallaby setup. When I try to spin up 10 VMs at once, 8 of them are created fine, but two get stuck in the scheduling state forever. I also can't find their UUIDs in any of the nova logs, except for a GET request for one VM in the nova-wsgi logs. Why are the VMs getting stuck in the BUILD state?

Thanks
Kumar

Revision history for this message
Balazs Gibizer (balazs-gibizer) wrote :

Look at the events of the failed instance:
$ openstack server event list <vm uuid>
$ openstack server event show <vm uuid> <request id from the above output>

Also, you can use the request ID from the first output to grep the nova service logs for more information.
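For example, something along these lines (a rough sketch; exact log locations and service names vary by deployment, and in an OSA setup the nova services usually log to the systemd journal inside their containers):

$ grep '<request id>' /var/log/nova/*.log
$ journalctl -u nova-compute | grep '<request id>'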

I'm setting this as Incomplete as there is not enough information to reproduce the problem.

Changed in nova:
status: New → Incomplete
Revision history for this message
YG Kumar (ygk-kmr) wrote :

I extracted the request ID and found these entries in the nova logs:

-----
Jun 09 06:39:02 nova-api-container-c7d6cb54 nova-api-wsgi[175869]: 2022-06-09 06:39:02.572 175869 INFO nova.api.openstack.requestlog [req-2a5ec3d3-2ae5-454c-88e9-a4e6b0eb5f68 6786d6082bdd4e7b81b9a49d50feccba f5446845ac2349c7ac7bd85aa8b50304 - default default] 10.167.37.198 "POST /v2.1/servers" status: 202 len: 426 microversion: 2.88 time: 3.589839

Jun 09 09:39:25 nova-api-container-888ff552 nova-api-wsgi[175081]: 2022-06-09 09:39:25.014 175081 INFO nova.api.openstack.requestlog [req-6a1c438f-5d02-4be3-82e5-f81baa46f31b 6786d6082bdd4e7b81b9a49d50feccba f5446845ac2349c7ac7bd85aa8b50304 - default default] 10.167.37.198 "GET /v2.1/servers/732f0787-a8be-4802-b6fa-7af21c0cfdaf/os-instance-actions/req-2a5ec3d3-2ae5-454c-88e9-a4e6b0eb5f68" status: 200 len: 327 microversion: 2.1 time: 0.102009

Jun 09 06:39:04 nova-api-container-e3b2866e nova-scheduler[174263]: 2022-06-09 06:39:04.689 174263 WARNING nova.scheduler.host_manager [req-2a5ec3d3-2ae5-454c-88e9-a4e6b0eb5f68 6786d6082bdd4e7b81b9a49d50feccba f5446845ac2349c7ac7bd85aa8b50304 - default default] Host text.prod.example.com has more disk space than database expected (13476 GB > 12982 GB)

But nova list still shows the server as building:

--
root@utility-container-28821c62:~# nova list | grep sample
| 732f0787-a8be-4802-b6fa-7af21c0cfdaf | sample-8 | BUILD | scheduling | NOSTATE |

Changed in nova:
status: Incomplete → In Progress
Revision history for this message
YG Kumar (ygk-kmr) wrote :

These are the logs from a successful VM:
----

Jun 14 09:05:38 test-nova-api-container-888ff552 nova-api-wsgi[175088]: 2022-06-14 09:05:38.723 175088 INFO nova.api.openstack.compute.server_external_events [req-11d26ed3-a1fe-4ef1-a720-2c02ba87f36f dda39ff182134bdd9a9fa76272948007 9857308de2374ac6ad6ae5a7b74ea84e - default default] Creating event network-changed:00337891-94e9-4a27-a641-d08026a98ca5 for instance 3b21ada9-9770-4772-8be7-fbef53f4896a on test

Revision history for this message
YG Kumar (ygk-kmr) wrote :

Jun 14 09:05:41 test039-01-nova-api-container-c7d6cb54 nova-conductor[211058]: 2022-06-14 09:05:41.874 211058 ERROR nova.scheduler.utils [req-63606504-ed6a-4f01-bce2-f0f827f61431 6786d6082bdd4e7b81b9a49d50feccba f5446845ac2349c7ac7bd85aa8b50304 - default default] [instance: 2f3e777c-c5a9-4dd9-95ce-91fc9d904be7] Error from last host: test039-16 (node test039-16.prod.test.com): ['Traceback (most recent call last):\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/compute/manager.py", line 2409, in _build_and_run_instance\n self.driver.spawn(context, instance, image_meta,\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/virt/libvirt/driver.py", line 4189, in spawn\n xml = self._get_guest_xml(context, instance, network_info,\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/virt/libvirt/driver.py", line 7020, in _get_guest_xml\n network_info_str = str(network_info)\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/network/model.py", line 617, in __str__\n return self._sync_wrapper(fn, *args, **kwargs)\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/network/model.py", line 600, in _sync_wrapper\n self.wait()\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/network/model.py", line 632, in wait\n self[:] = self._gt.wait()\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/eventlet/greenthread.py", line 181, in wait\n return self._exit_event.wait()\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/eventlet/event.py", line 125, in wait\n result = hub.switch()\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/eventlet/hubs/hub.py", line 313, in switch\n return self.greenlet.switch()\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/eventlet/greenthread.py", line 221, in main\n result = function(*args, **kwargs)\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/utils.py", line 660, in context_wrapper\n return func(*args, **kwargs)\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/compute/manager.py", line 1780, in _allocate_network_async\n raise e\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/compute/manager.py", line 1759, in _allocate_network_async\n nwinfo = self.network_api.allocate_for_instance(\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/network/neutron.py", line 1092, in allocate_for_instance\n created_port_ids = self._update_ports_for_instance(\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/network/neutron.py", line 1224, in _update_ports_for_instance\n vif.destroy()\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/openstack/venvs/nova-23.2.0/lib/python3.8/site-packages/nova/network/neutron...


Revision history for this message
YG Kumar (ygk-kmr) wrote :

I was able to find the root cause. You can close this bug.

Revision history for this message
wangkuntian (wangkuntian) wrote :

It seems that the error occurs when Neutron binds the port to the instance. You should check your Neutron logs and verify the Neutron services on your compute node.
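For instance, something like this could help narrow it down (a sketch; the port UUID comes from the first command, the log path varies by deployment, and the agent names depend on your ML2 driver):

$ openstack port list --server <vm uuid>
$ grep '<port uuid>' /var/log/neutron/neutron-server.log
$ openstack network agent list --host <compute hostname>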
