UEFI not supported on Xena

Bug #1955035 reported by Daniel Niasoff
This bug affects 7 people
Affects: OpenStack Compute (nova)
Status: Confirmed
Importance: Undecided
Assigned to: Unassigned

Bug Description

I migrated from Ubuntu Focal to CentOS Stream 8 as the container OS for my OpenStack nova node. I am using Kolla to build and deploy the container images, which are built from source using the latest code on the stable/xena branch.

Since the migration I have not been able to run any instances that have UEFI enabled.

Not sure if relevant, but I am booting the host node using UEFI.

Here is the error:

2021-12-16 12:18:17.516 7 INFO nova.compute.manager [-] [instance: ac977c9b-c680-4bbc-b3bd-877185463fa6] VM Resumed (Lifecycle Event)
2021-12-16 12:18:17.518 7 INFO nova.virt.libvirt.driver [-] [instance: ac977c9b-c680-4bbc-b3bd-877185463fa6] Instance rebooted successfully.
2021-12-16 12:18:17.545 7 INFO nova.compute.manager [req-ec01253c-3143-455f-be7c-8eb4f1246985 - - - - -] [instance: ac977c9b-c680-4bbc-b3bd-877185463fa6] During sync_power_state the instance has a pending task (powering-on). Skip.
2021-12-16 12:18:17.546 7 INFO nova.compute.manager [req-ec01253c-3143-455f-be7c-8eb4f1246985 - - - - -] [instance: ac977c9b-c680-4bbc-b3bd-877185463fa6] VM Started (Lifecycle Event)
2021-12-16 12:18:17.585 7 INFO nova.compute.manager [req-ec01253c-3143-455f-be7c-8eb4f1246985 - - - - -] [instance: ac977c9b-c680-4bbc-b3bd-877185463fa6] During sync_power_state the instance has a pending task (powering-on). Skip.
2021-12-16 12:18:19.286 7 WARNING nova.compute.manager [req-26f1466f-1c8e-4da5-bc16-5bc9407f4564 f6215b2b71024bbf989ee9c73201aa5a f658461a492f4d68846843779506a194 - default default] [instance: ac977c9b-c680-4bbc-b3bd-877185463fa6] Received unexpected event network-vif-plugged-d0a2bc85-5268-475c-804e-aff00d2e0b26 for instance with vm_state active and task_state None.
2021-12-16 12:18:42.834 7 INFO nova.compute.claims [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Claim successful on node node02.qumulus.io
2021-12-16 12:18:43.018 7 INFO nova.virt.libvirt.driver [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
2021-12-16 12:18:43.093 7 INFO nova.virt.block_device [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Booting with volume 8c1c22bb-3e3b-43ca-a36f-4be6570b9bd1 at /dev/vda
2021-12-16 12:18:43.181 7 INFO oslo.privsep.daemon [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp1t7lo_re/privsep.sock']
2021-12-16 12:18:43.645 7 INFO oslo.privsep.daemon [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] Spawned new privsep daemon via rootwrap
2021-12-16 12:18:43.564 131 INFO oslo.privsep.daemon [-] privsep daemon starting
2021-12-16 12:18:43.566 131 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2021-12-16 12:18:43.568 131 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2021-12-16 12:18:43.568 131 INFO oslo.privsep.daemon [-] privsep daemon running as pid 131
2021-12-16 12:18:43.765 7 WARNING os_brick.initiator.connectors.nvmeof [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] Process execution error in _get_host_uuid: Unexpected error while running command.
Command: blkid overlay -s UUID -o value
Exit code: 2
Stdout: ''
Stderr: '': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
2021-12-16 12:18:44.386 7 INFO nova.virt.libvirt.driver [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Creating image
2021-12-16 12:18:44.793 7 WARNING nova.virt.libvirt.driver [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] uefi support is without some kind of functional testing and therefore considered experimental.
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Instance failed to spawn: nova.exception.UEFINotSupported: UEFI is not supported
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Traceback (most recent call last):
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py", line 2638, in _build_resources
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] yield resources
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py", line 2402, in _build_and_run_instance
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] accel_info=accel_info)
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 4219, in spawn
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] mdevs=mdevs, accel_info=accel_info)
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7062, in _get_guest_xml
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] context, mdevs, accel_info)
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 6673, in _get_guest_config
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] self._configure_guest_by_virt_type(guest, instance, image_meta, flavor)
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 6284, in _configure_guest_by_virt_type
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] has_secure_boot=guest.os_loader_secure)
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/host.py", line 1681, in get_loader
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] raise exception.UEFINotSupported()
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] nova.exception.UEFINotSupported: UEFI is not supported
2021-12-16 12:18:44.795 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb]
2021-12-16 12:18:44.799 7 INFO nova.compute.manager [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Terminating instance
2021-12-16 12:18:44.805 7 INFO nova.virt.libvirt.driver [-] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Instance destroyed successfully.
2021-12-16 12:18:44.808 7 INFO os_vif [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:f8:59:ea,bridge_name='qbr3fa2e5ce-93',has_traffic_filtering=True,id=3fa2e5ce-9343-4860-8165-d52b7eba2f07,network=Network(239be548-514c-4c1d-871f-7f933b9dcd29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fa2e5ce-93')
2021-12-16 12:18:44.809 7 WARNING nova.virt.libvirt.volume.mount [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] Request to remove attachment (volume-8c1c22bb-3e3b-43ca-a36f-4be6570b9bd1, 1ab7eba3-dd5b-4f0c-b104-779b54f069eb) from /var/lib/nova/mnt/1a8925540ef815e5414514da4c403f3d, but we don't think it's in use.
2021-12-16 12:18:44.810 7 INFO nova.virt.libvirt.driver [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Deleting instance files /var/lib/nova/instances/1ab7eba3-dd5b-4f0c-b104-779b54f069eb_del
2021-12-16 12:18:44.810 7 INFO nova.virt.libvirt.driver [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Deletion of /var/lib/nova/instances/1ab7eba3-dd5b-4f0c-b104-779b54f069eb_del complete
2021-12-16 12:18:44.861 7 INFO nova.compute.manager [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Took 0.06 seconds to destroy the instance on the hypervisor.
2021-12-16 12:18:45.052 7 INFO nova.compute.manager [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Took 0.19 seconds to detach 1 volumes for instance.
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Failed to build and run instance: nova.exception.UEFINotSupported: UEFI is not supported
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Traceback (most recent call last):
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py", line 2402, in _build_and_run_instance
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] accel_info=accel_info)
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 4219, in spawn
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] mdevs=mdevs, accel_info=accel_info)
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7062, in _get_guest_xml
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] context, mdevs, accel_info)
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 6673, in _get_guest_config
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] self._configure_guest_by_virt_type(guest, instance, image_meta, flavor)
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 6284, in _configure_guest_by_virt_type
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] has_secure_boot=guest.os_loader_secure)
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/host.py", line 1681, in get_loader
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] raise exception.UEFINotSupported()
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] nova.exception.UEFINotSupported: UEFI is not supported
2021-12-16 12:18:45.147 7 ERROR nova.compute.manager [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb]
2021-12-16 12:18:45.158 7 INFO os_vif [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:f8:59:ea,bridge_name='qbr3fa2e5ce-93',has_traffic_filtering=True,id=3fa2e5ce-9343-4860-8165-d52b7eba2f07,network=Network(239be548-514c-4c1d-871f-7f933b9dcd29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fa2e5ce-93')
2021-12-16 12:18:45.465 7 INFO nova.compute.manager [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] [instance: 1ab7eba3-dd5b-4f0c-b104-779b54f069eb] Took 0.31 seconds to deallocate network for instance.
2021-12-16 12:18:45.564 7 INFO nova.scheduler.client.report [req-6ab04766-9531-438e-8682-156817707017 191c0cf8228043648b7341a25dd5a3d9 a9f3babb231b4251bd466fc91a0a1e31 - default default] Deleted allocations for instance 1ab7eba3-dd5b-4f0c-b104-779b54f069eb

Thanks

Tags: libvirt
description: updated
tags: added: libvirt
Revision history for this message
Matthew Thode (prometheanfire) wrote :

Upgrading from Wallaby to Xena without changing the OS version also had me hit this.

Seems to have been introduced in commit faad45b6323d7c52d35b7ccc45eacb5580b3b4d3.

summary: - UEFI not supported on Centos Stream 8
+ UEFI not supported on Xena
Revision history for this message
Adam Zheng (wazh7587) wrote :

I ran into this issue as well after upgrading nova containers built off the latest stable/wallaby.
After deployment of nova-libvirt, new UEFI machines no longer boot and the following is logged:

2022-03-22 17:02:32.102 7 ERROR nova.compute.manager [req-9b5bf117-f72b-4ea6-b059-e937e2db6212 89fee540585c9929befdb0c40c845dc47552126acf7b8b18fafce5393cdd818c b8d86bc97cf54605ae41d37ac70178b4 - d3acf2f34f8f47ca987b500937c7be58 d3acf2f34f8f47ca987b500937c7be58] [instance: 6aaa5ee5-aa10-4be5-b943-d8dea6a5080a] Failed to build and run instance: nova.exception.UEFINotSupported: UEFI is not supported

Existing UEFI machines could still reboot, but would no longer boot if shut down at the hypervisor.

I had to re-deploy the older nova-libvirt container as a workaround for now.

# The old centos8s container, based off master from around 5 months ago.
()[nova@0cc7e1c86c17 /]$ ls /nova-base-source/
nova-23.0.3.dev23

# The new centos8s container, based off stable/wallaby from 2-3 days ago.
()[root@67c6853d02ba /]# ls /nova-base-source/
nova-23.2.1.dev1

I've tried building containers off several other branches and versions of nova (including master), but all result in the same error. Only redeploying that old container, as a band-aid solution for now, allows UEFI instances to boot again.

Revision history for this message
Daniel Niasoff (5-daniel-o) wrote :

Just wanted to point out that I am now able to boot using UEFI.

I moved to new server hardware and upgraded to CentOS Stream 8, and it's working fine now. I am deploying with Kolla, with the Nova containers built from the latest Xena code.

Secure Boot doesn't work, but that's probably because the hypervisor wasn't booted with Secure Boot.

Revision history for this message
melanie witt (melwitt) wrote :

I am not sure, but I wonder if this issue is related to the following bug, where some OS machine types are missing from firmware descriptor files [1]. This is the MR that has been opened for that:

https://gitlab.com/redhat/centos-stream/src/edk2/-/merge_requests/13

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2090752

Revision history for this message
Magnus Lööf (magnus-loof) wrote :

I can reproduce this with Wallaby binary containers from 2022-08-24, 2022-08-16, 2022-06-01.

Confirmed to work on earlier Victoria binary containers.

Revision history for this message
Sylvain Bauza (sylvain-bauza) wrote :

Can someone reproduce it with a devstack environment *without* using containers?

Changed in nova:
status: New → Incomplete
Pierre Riteau (priteau)
Changed in nova:
status: Incomplete → Confirmed
Revision history for this message
Pierre Riteau (priteau) wrote :

I can reproduce this using recent Kolla Wallaby container images on CentOS Stream 8.

The problem happens in get_loader():

machine = self.get_canonical_machine_type(arch, machine)

=> this sets machine to pc-q35-rhel8.6.0

If has_secure_boot is false, only 50-edk2-ovmf-cc.json from edk2-ovmf would match, but it only supports machine pc-q35-rhel8.5.0, so we fail to return a loader and instead raise exception.UEFINotSupported().
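The matching failure can be sketched as follows. This is a minimal, hypothetical reconstruction of the descriptor-matching step (the real logic lives in nova/virt/libvirt/host.py; the file names and machine globs are taken from this comment, and the dict shapes are simplified stand-ins for the JSON files under /usr/share/qemu/firmware/):

```python
import fnmatch

# Simplified stand-ins for the host's firmware descriptor files
# (contents abbreviated and hypothetical).
DESCRIPTORS = [
    {
        "path": "50-edk2-ovmf-cc.json",
        "features": [],  # no "secure-boot" feature
        "machines": ["pc-q35-rhel8.5.0"],  # 8.6.0 missing -> the bug
    },
    {
        "path": "40-edk2-ovmf-sb.json",
        "features": ["secure-boot"],
        "machines": ["pc-q35-rhel8.*"],
    },
]

def get_loader(machine, has_secure_boot):
    """Return the first descriptor whose machine globs match the
    canonical machine type and whose secure-boot support matches."""
    for desc in DESCRIPTORS:
        if has_secure_boot != ("secure-boot" in desc["features"]):
            continue
        if any(fnmatch.fnmatch(machine, glob) for glob in desc["machines"]):
            return desc["path"]
    # nova raises exception.UEFINotSupported here
    raise RuntimeError("UEFI is not supported")

# pc-q35-rhel8.5.0 matches, pc-q35-rhel8.6.0 matches nothing and raises.
```

With these descriptors, a non-secure-boot request for pc-q35-rhel8.6.0 falls through every entry, which is exactly the UEFINotSupported failure in the traceback above.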

This fixes it:

sed -i 's/"pc-q35-rhel8.5.0"/"pc-q35-rhel8.5.0", "pc-q35-rhel8.6.0"/' /usr/share/qemu/firmware/50-edk2-ovmf-cc.json
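The same patch can be applied (or verified) structurally instead of with sed. A hedged sketch, assuming the standard QEMU firmware descriptor layout with a top-level "targets" list (the sample content is abbreviated and hypothetical, mirroring 50-edk2-ovmf-cc.json):

```python
import json

def add_machine_glob(descriptor_text, new_glob):
    """Append a machine glob to every target in a firmware descriptor,
    skipping targets that already list it."""
    desc = json.loads(descriptor_text)
    for target in desc.get("targets", []):
        machines = target.setdefault("machines", [])
        if new_glob not in machines:
            machines.append(new_glob)
    return json.dumps(desc, indent=4)

# Abbreviated, hypothetical stand-in for 50-edk2-ovmf-cc.json:
sample = json.dumps({
    "targets": [
        {"architecture": "x86_64", "machines": ["pc-q35-rhel8.5.0"]}
    ]
})
patched = add_machine_glob(sample, "pc-q35-rhel8.6.0")
```

Editing JSON through the parser rather than with a textual sed substitution keeps the file well-formed and makes the change idempotent, which matters if the workaround is baked into a deployment script that may run more than once.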

Revision history for this message
Pierre Riteau (priteau) wrote :

I've changed the bug status back to Confirmed, but this is really a distribution bug, not a Nova bug. Feel free to change the status as appropriate.

Revision history for this message
Sven Spescha (gainward700) wrote :

I also ran into this problem with Xena on CentOS Stream 8 containers.
See Bug #2003185.

Unfortunately I wasn't able to fix it, not even with the solution @priteau suggested.
