nova-lvm lvs return -11 and fails with Failed to get udev device handler for device
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Compute (nova) | Fix Released | Medium | Balazs Gibizer | |
| Wallaby | Fix Released | Undecided | Unassigned | |
Bug Description
Description
===========
Tests within the nova-lvm job fail during cleanup with the following trace visible in n-cpu:
Jun 11 13:04:38.733030 ubuntu- [n-cpu traceback truncated in this copy; `lvs` exits with -11 and logs "Failed to get udev device handler for device"]
Bug #1901783 details something similar to this in Cinder, but as the above comes from native Nova ephemeral storage code and carries a different return code, I'm going to treat this as a separate issue for now.
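For context on the return code: a negative exit status reported by Python's subprocess layer (which oslo.concurrency's processutils uses underneath) means the child process was killed by a signal, so -11 indicates the `lvs`/`vgs` binary died from signal 11 (SIGSEGV) rather than exiting with an error code. A minimal demonstration on Linux:

```python
import signal
import subprocess

# Ask a shell to kill itself with SIGSEGV; subprocess reports the
# negative signal number as the return code, just like the -11 above.
proc = subprocess.run(["sh", "-c", "kill -SEGV $$"])
print(proc.returncode)                        # -11 on Linux
print(signal.Signals(-proc.returncode).name)  # SIGSEGV
```

This is why the error is distinct from an LVM command merely reporting failure: the tool is crashing outright.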
Steps to reproduce
==================
Only seen as part of the nova-lvm job at present.
Expected result
===============
The nova-lvm job and the removal of instances succeed.
Actual result
=============
The nova-lvm job and the removal of instances fail.
Environment
===========
1. Exact version of OpenStack you are running. See the following
list for all releases: http://
master
2. Which hypervisor did you use?
(For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
What's the version of that?
libvirt
3. Which storage type did you use?
(For example: Ceph, LVM, GPFS, ...)
What's the version of that?
LVM (ephemeral)
4. Which networking type did you use?
(For example: nova-network, Neutron with OpenVSwitch, ...)
N/A
Logs & Configs
==============
As above.
summary changed:
- lvs return -11 and fails with Failed to get udev device handler for device
+ nova-lvm lvs return -11 and fails with Failed to get udev device handler for device
The same issue can also occur in other scenarios, e.g. during the update_available_resource periodic task:
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager Traceback (most recent call last):
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", line 9856, in _update_available_resource_for_node
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager     self.rt.update_available_resource(context, nodename,
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 879, in update_available_resource
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 8858, in get_available_resource
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager     disk_info_dict = self._get_local_gb_info()
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 7299, in _get_local_gb_info
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager     info = lvm.get_volume_group_info(
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/virt/libvirt/storage/lvm.py", line 92, in get_volume_group_info
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager     out, err = nova.privsep.fs.vginfo(vg)
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager   File "/usr/local/lib/python3.8/dist-packages/oslo_privsep/priv_context.py", line 247, in _wrap
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager     return self.channel.remote_call(name, args, kwargs)
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager   File "/usr/local/lib/python3.8/dist-packages/oslo_privsep/daemon.py", line 224, in remote_call
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager     raise exc_type(*result[2])
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager Command: vgs --noheadings --nosuffix --separator | --units b -o vg_size,vg_free stack-volumes-default
Jun 03 05:20:33.248693 ubuntu-focal-inap-mtl01-0024946450 nova-compute[90971]: ERROR nova.compute.manager Exit code: -11
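For reference, the `vgs` query shown in the trace prints vg_size and vg_free in bytes, '|'-separated. A rough sketch of what get_volume_group_info has to do with that output (parse_vginfo is illustrative, not nova's actual code):

```python
def parse_vginfo(out: str) -> dict:
    """Parse output of `vgs --noheadings --nosuffix --separator '|'
    --units b -o vg_size,vg_free <vg>`, e.g. "  10737418240|5368709120".
    Illustrative sketch only, not nova's implementation.
    """
    size_b, free_b = (int(float(field)) for field in out.strip().split("|"))
    return {"total": size_b, "free": free_b, "used": size_b - free_b}

print(parse_vginfo("  10737418240|5368709120\n"))
# {'total': 10737418240, 'free': 5368709120, 'used': 5368709120}
```

When `vgs` dies with SIGSEGV there is no output to parse, so the ProcessExecutionError propagates up through the resource tracker as shown above.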