nova API NoValidHost: No valid host was found; the scheduler's resource view only uses the hypervisor's local phys_disk
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Expired | Undecided | Unassigned |
Bug Description
Use-case description:
When launching an instance, "NoValidHost: No valid host was found." appears in nova-conductor on the controller.
This happens on both the Pike and Queens releases.
Example resources on the compute node:
INFO nova.compute.
My storage setup is one LVM backend of 1 TB served by cinder-volume, plus a Dell FC4020 array with 10 TB free. Volume creation works without problems, and so does attaching. The problem is that the scheduler does not know the free space on those backends; it only sees phys_disk, the local disk of the compute node.
Sample case:
1) Launch instance one with flavor root_disk=31G, ram=1G, vcpu=2: works with LVM or the FC storage.
2) Launch instance two with the same flavor: works with LVM or the FC storage.
3) Launching a third instance fails with "...NoValidHost...", because phys_disk=72G while 3 * 31G = 93G, and 72G < 93G.
4) Instances schedule only while the sum of the flavors' root disks stays within the local disk. A flavor whose root disk is larger than 72G will never schedule at all, yet with cinder-volume I can create several volumes that together exceed the compute node's local disk capacity, and I can attach them without problems.
5) One possible solution appears in the documentation (https:/ ):
"...possible to exclude the CoreFilter, RamFilter and DiskFilter from the list of enabled FilterScheduler filters such that scheduling decisions are not based on CPU, RAM or disk usage. Once all computes are reporting into the Placement service, however, and the FilterScheduler starts to use the Placement service for decisions, those excluded filters are ignored and the scheduler will make requests based on VCPU, MEMORY_MB and DISK_GB inventory. If you wish to effectively ignore that type of resource for placement decisions, you will need to adjust the corresponding cpu_allocation_ratio..."
In my case I adjusted disk_allocation_ratio,
but I could not disable the disk check with that value.
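The arithmetic in step 3 can be sketched as follows. This is a simplified model of the scheduler's disk check, not nova's actual code; the numbers are the ones from this report (72 GB of local phys_disk, 31 GB root disk per flavor):

```python
# Simplified sketch of a disk-based host check (not nova's real DiskFilter):
# the scheduler only sees the compute node's local phys_disk, so cinder
# backend capacity never enters this calculation.

def host_passes(free_disk_gb: int, requested_disk_gb: int) -> bool:
    """Reject the host when the flavor's root disk does not fit locally."""
    return requested_disk_gb <= free_disk_gb

phys_disk_gb = 72          # local disk reported by the hypervisor
flavor_root_disk_gb = 31   # root_disk from the flavor

placed = 0
used = 0
for _ in range(3):
    if host_passes(phys_disk_gb - used, flavor_root_disk_gb):
        placed += 1
        used += flavor_root_disk_gb

# Two instances fit (2 * 31 = 62 <= 72); the third needs 93 > 72 total
# and is rejected, producing NoValidHost.
print(placed, used)
```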
The workaround for now is:
openstack flavor set --property resources:DISK_GB=0 idflavor
| eeed1adb-
Flavor properties:
properties | aggregate_
This "patch" sets "resources:DISK_GB=0" on the flavor.
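For volume-backed instances, that flavor property can be combined with boot-from-volume so that no local disk is claimed at all. A sketch only; the flavor name, volume ID, and network ID are illustrative:

```shell
# Tell Placement not to claim local DISK_GB for this flavor
# (flavor name is illustrative).
openstack flavor set --property resources:DISK_GB=0 m1.volume-backed

# Boot from an existing cinder volume so the storage comes from the
# cinder backend (LVM or the FC array), not the compute node's local disk.
openstack server create --flavor m1.volume-backed \
  --volume <volume-id> --network <network-id> test-instance
```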
Another option is root_disk=0 in the flavor, but the documentation says that is only meant for testing or for booting instances.
I don't know whether this is the correct way to handle it, or whether nova-placement simply has wrong information about cinder resource capacity.
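For reference, the disk_allocation_ratio option mentioned above is set in nova.conf on the compute node. A sketch of the attempted change (the value 20.0 is illustrative); note that once the Placement service is in use, this only scales the DISK_GB inventory reported for the node, it does not disable disk-based scheduling:

```ini
# /etc/nova/nova.conf on the compute node (sketch; value is illustrative).
# With Placement in use this multiplies the DISK_GB inventory reported
# for the node instead of disabling the disk check.
[DEFAULT]
disk_allocation_ratio = 20.0
```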
I apologize for the long report; I read several similar cases and decided to report this one.
Package details:
OpenSuse 42.3 Pike
Repository : Pike
Name : openstack-nova-compute
Version : 16.1.5~dev57-1.1
Architecture : noarch
Vendor : obs://build.
Installed size : 21.5 KiB
Installed : Yes
Status : out-of-date (version 16.1.5~dev49-1.1 installed)
Source package : openstack-
Summary : OpenStack Compute (Nova) - Compute
-------
Repository : Pike
Name : openstack-nova-scheduler
Version : 16.1.5~dev57-1.1
Architecture : noarch
Vendor : obs://build.
Installed size : 11.1 KiB
Installed : Yes
Status : out-of-date (version 16.1.5~dev49-1.1 installed)
Source package : openstack-
Summary : OpenStack Compute (Nova) - Scheduler
Description :
This package contains the scheduler for OpenStack.
-------
Repository : Pike
Name : openstack-nova-api
Version : 16.1.5~dev57-1.1
Architecture : noarch
Vendor : obs://build.
Installed size : 18.0 KiB
Installed : Yes
Status : out-of-date (version 16.1.5~dev49-1.1 installed)
Source package : openstack-
Summary : OpenStack Compute (Nova) - API
Description :
This package contains the OpenStack Nova API.
Repository : Pike
Name : openstack-nova-placement-api
Version : 16.1.5~dev57-1.1
Architecture : noarch
Vendor : obs://build.
Installed size : 14.3 KiB
Installed : Yes
Status : out-of-date (version 16.1.5~dev49-1.1 installed)
Source package : openstack-
Summary : OpenStack Compute (Nova) - Placement API
Ubuntu 18.04 LTS Queens:
Package: nova-compute
Version: 2:17.0.5-0ubuntu1
State: installed
Automatically installed: no
Priority: extra
Section: net
Maintainer: Ubuntu Developers <email address hidden>
Architecture: all
Package: nova-scheduler
Version: 2:17.0.5-0ubuntu1
State: installed
Automatically installed: no
Priority: extra
Section: net
Maintainer: Ubuntu Developers <email address hidden>
Package: nova-api
Version: 2:17.0.5-0ubuntu1
State: installed
Automatically installed: no
Priority: extra
Section: net
Maintainer: Ubuntu Developers <email address hidden>
Package: nova-placement-api
Version: 2:17.0.5-0ubuntu1
State: installed
Automatically installed: no
Priority: extra
Section: universe/net
Maintainer: Ubuntu Developers <email address hidden>
regards
tags: | added: scheduler |
I have the same problem:
openstack hypervisor stats show
+----------------------+-------+
| Field                | Value |
+----------------------+-------+
| count                | 1     |
| current_workload     | 0     |
| disk_available_least | 115   |
| free_disk_gb         | 25    |
| free_ram_mb          | 5434  |
| local_gb             | 125   |
| local_gb_used        | 100   |
| memory_mb            | 7982  |
| memory_mb_used       | 2548  |
| running_vms          | 1     |
| vcpus                | 8     |
| vcpus_used           | 1     |
+----------------------+-------+
although I have two LVM pools:
cinder get-pools --detail
+-----------------------------+---------------------------------------------+
| Property                    | Value                                       |
+-----------------------------+---------------------------------------------+
| QoS_support                 | False                                       |
| allocated_capacity_gb       | 0                                           |
| backend_state               | up                                          |
| driver_version              | 3.0.0                                       |
| filter_function             | None                                        |
| free_capacity_gb            | 171.0                                       |
| goodness_function           | None                                        |
| location_info               | LVMVolumeDriver:r2s13:cinder-volumes:thin:0 |
| max_over_subscription_ratio | 20.0                                        |
| multiattach                 | True                                        |
| name                        | r2s13@lvm#LVM_iSCSI                         |
| pool_name                   | LVM_iSCSI                                   |
| provisioned_capacity_gb     | 0.0                                         |
| reserved_percentage         | 0                                           |
| storage_protocol            | iSCSI                                       |
| thick_provisioning_support  | False                                       |
| thin_provisioning_support   | True                                        |
| timestamp                   | 2018-12-17T12:18:22.024282                  |
| total_capacity_gb           | 171.0                                       |
| total_volumes               | 1                                           |
| vendor_name                 | Open Source                                 |
| volume_backend_name         | LVM_iSCSI                                   |
+-----------------------------+---------------------------------------------+
+-----------------------+-------+
| Property              | Value |
+-----------------------+-------+
| QoS_support           | False |
| allocated_capacity_gb |       |
| backend_state         | up    |
| driver_version        | 3.0.0 ...
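To see exactly what the scheduler is working from, the Placement inventory for the compute node can be inspected with the osc-placement plugin. A sketch; the UUID placeholder is illustrative and the plugin must be installed:

```shell
# List resource providers, then show the inventory the scheduler consumes.
openstack resource provider list
openstack resource provider inventory list <compute-node-uuid>

# The DISK_GB inventory here comes only from the hypervisor's local disk;
# the cinder pool capacity shown by `cinder get-pools --detail` is never
# part of this inventory.
```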