nova API NoValidHost: "No valid host was found"; resource view only uses the hypervisor's local phys_disk

Bug #1787846 reported by Rodolfo on 2018-08-19
This bug affects 4 people
Affects Status Importance Assigned to Milestone
OpenStack Compute (nova)
Undecided
Unassigned

Bug Description

Use case description:

When launching an instance, nova-conductor on the controller reports "NoValidHost: No valid host was found."

This occurs on both the Pike and Queens releases.

For example, the resources on the compute node:

INFO nova.compute.resource_tracker [req-20c5da6c-a550-408d-85dc-92039e41a3fd - - - - -] Final resource view: name=compute2.example.local phys_ram=15793MB used_ram=2608MB phys_disk=72GB used_disk=62GB total_vcpus=6 used_vcpus=4 pci_stats=[]

My storage setup is one LVM backend with 1 TB served by cinder-volume, and in the other case a Dell FC4020 storage array with 10 TB free. Volume creation and attachment both work without problems. The problem is that the scheduler does not know the free space on those backends and only sees phys_disk, the local disk of the compute node.

Sample case:

1) Boot instance one with flavor root_disk=31G ram=1G vcpu=2: works with LVM or FC storage.
2) Boot instance two with the same flavor (root_disk=31G ram=1G vcpu=2): works with LVM or FC storage.
3) Booting the third instance fails with "..NoValidHost..", because phys_disk=72G and 31*3 = 93 GB, so 72 GB < 93 GB.

4) Instances boot as long as the sum of the flavors' disk sizes does not exceed phys_disk. Likewise, a flavor with a disk larger than 72G can never be scheduled, even though I can create several Cinder volumes exceeding the local disk capacity of the compute node, and I can also attach those disks without problems.

5) One possible solution appears in the Placement documentation (https://docs.openstack.org/nova/queens/user/placement.html):
"...possible to exclude the CoreFilter, RamFilter and DiskFilter from the list of enabled FilterScheduler filters such that scheduling decisions are not based on CPU, RAM or disk usage. Once all computes are reporting into the Placement service, however, and the FilterScheduler starts to use the Placement service for decisions, those excluded filters are ignored and the scheduler will make requests based on VCPU, MEMORY_MB and DISK_GB inventory. If you wish to effectively ignore that type of resource for placement decisions, you will need to adjust the corresponding cpu_allocation_ratio, ram_allocation_ratio, and/or disk_allocation_ratio configuration options to be very high values, e.g. 9999.0...."

In my case I set disk_allocation_ratio = 9999.0, but even with this value the disk check is not effectively disabled.
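The arithmetic of the sample case above can be sketched with a DiskFilter-style capacity check (a simplified illustration only; `disk_gb_fits` is a hypothetical helper, not nova code, and the numbers are taken from the "Final resource view" log line):

```python
# Simplified sketch of a DiskFilter/placement-style capacity check.
def disk_gb_fits(total_gb, used_gb, requested_gb,
                 allocation_ratio=1.0, reserved_gb=0):
    """Return True if the request fits within (total - reserved) * ratio."""
    limit = (total_gb - reserved_gb) * allocation_ratio
    return used_gb + requested_gb <= limit

phys_disk = 72        # local disk reported by the compute node
flavor_root_gb = 31   # root_disk of the flavor in the sample case

used = 0
for n in range(1, 4):
    if disk_gb_fits(phys_disk, used, flavor_root_gb):
        used += flavor_root_gb
    else:
        # prints: instance 3: NoValidHost (need 93 GB > 72 GB)
        print(f"instance {n}: NoValidHost "
              f"(need {used + flavor_root_gb} GB > {phys_disk} GB)")
```

With an allocation ratio of 9999.0 the same check would pass, which is why the documentation suggests it as a way to effectively ignore disk; the report below is that in practice this did not disable the check.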

The solution for now is:

openstack flavor set --property resources:DISK_GB=0 <flavor_id>

| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
| eeed1adb-d9b1-40c8-9147-78f8500c99ab | m2d150c2 | 2048 | 150 | 0 | 2 | True |

Flavor properties:

properties | aggregate_instance_extra_specs:novanitrogeno='true', resources:DISK_GB='0'

This workaround with "resources:DISK_GB=0" works for me.
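Roughly, a resources:DISK_GB=0 extra spec overrides the disk amount derived from the flavor, so DISK_GB is not requested from Placement at all. A minimal sketch of that override logic (simplified; `placement_request` is a hypothetical helper, not nova's actual implementation):

```python
# Sketch: how a "resources:<CLASS>=<amount>" flavor extra spec can
# override the resource amounts derived from the flavor itself.
def placement_request(flavor):
    """Build the resource amounts the scheduler would ask Placement for."""
    request = {
        "VCPU": flavor["vcpus"],
        "MEMORY_MB": flavor["ram_mb"],
        "DISK_GB": flavor["root_gb"],
    }
    for key, value in flavor.get("extra_specs", {}).items():
        if key.startswith("resources:"):
            request[key.split(":", 1)[1]] = int(value)
    # Resource classes requested with amount 0 are dropped entirely,
    # so Placement never checks local disk capacity for this flavor.
    return {rc: amt for rc, amt in request.items() if amt > 0}

# The m2d150c2 flavor from the listing above, with the workaround applied.
m2d150c2 = {"vcpus": 2, "ram_mb": 2048, "root_gb": 150,
            "extra_specs": {"resources:DISK_GB": "0"}}
print(placement_request(m2d150c2))  # {'VCPU': 2, 'MEMORY_MB': 2048}
```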

Another option is setting root_disk = 0 in the flavor, but the documentation says this is only intended for testing or for volume-backed instances.

And I don't know whether this is the correct approach, or whether nova-placement simply has the wrong information about Cinder's resource capacity.

Apologies for the write-up; I read about several similar cases and decided to report this.

Package details:

openSUSE 42.3, Pike:

Repository     : Pike
Name           : openstack-nova-compute
Version        : 16.1.5~dev57-1.1
Architecture   : noarch
Vendor         : obs://build.opensuse.org/Cloud:OpenStack
Installed size : 21.5 KiB
Installed      : Yes
Status         : out-of-date (version 16.1.5~dev49-1.1 installed)
Source package : openstack-nova-16.1.5~dev57-1.1.src
Summary        : OpenStack Compute (Nova) - Compute

--------------------------------------------------
Repository     : Pike
Name           : openstack-nova-scheduler
Version        : 16.1.5~dev57-1.1
Architecture   : noarch
Vendor         : obs://build.opensuse.org/Cloud:OpenStack
Installed size : 11.1 KiB
Installed      : Yes
Status         : out-of-date (version 16.1.5~dev49-1.1 installed)
Source package : openstack-nova-16.1.5~dev57-1.1.src
Summary        : OpenStack Compute (Nova) - Scheduler
Description    :
    This package contains the scheduler for OpenStack.

--------------------------------------------
Repository     : Pike
Name           : openstack-nova-api
Version        : 16.1.5~dev57-1.1
Architecture   : noarch
Vendor         : obs://build.opensuse.org/Cloud:OpenStack
Installed size : 18.0 KiB
Installed      : Yes
Status         : out-of-date (version 16.1.5~dev49-1.1 installed)
Source package : openstack-nova-16.1.5~dev57-1.1.src
Summary        : OpenStack Compute (Nova) - API
Description    :
    This package contains the OpenStack Nova API.

Repository     : Pike
Name           : openstack-nova-placement-api
Version        : 16.1.5~dev57-1.1
Architecture   : noarch
Vendor         : obs://build.opensuse.org/Cloud:OpenStack
Installed size : 14.3 KiB
Installed      : Yes
Status         : out-of-date (version 16.1.5~dev49-1.1 installed)
Source package : openstack-nova-16.1.5~dev57-1.1.src
Summary        : OpenStack Compute (Nova) - Placement API

Ubuntu 18.04 LTS, Queens:

Package: nova-compute
Version: 2:17.0.5-0ubuntu1
State: installed
Automatically installed: no
Priority: extra
Section: net
Maintainer: Ubuntu Developers <email address hidden>
Architecture: all

Package: nova-scheduler
Version: 2:17.0.5-0ubuntu1
State: installed
Automatically installed: no
Priority: extra
Section: net
Maintainer: Ubuntu Developers <email address hidden>

Package: nova-api
Version: 2:17.0.5-0ubuntu1
State: installed
Automatically installed: no
Priority: extra
Section: net
Maintainer: Ubuntu Developers <email address hidden>

Package: nova-placement-api
Version: 2:17.0.5-0ubuntu1
State: installed
Automatically installed: no
Priority: extra
Section: universe/net
Maintainer: Ubuntu Developers <email address hidden>

Regards

bel (varr) wrote :

I have the same problem:

openstack hypervisor stats show

+----------------------+-------+
| Field | Value |
+----------------------+-------+
| count | 1 |
| current_workload | 0 |
| disk_available_least | 115 |
| free_disk_gb | 25 |
| free_ram_mb | 5434 |
| local_gb | 125 |
| local_gb_used | 100 |
| memory_mb | 7982 |
| memory_mb_used | 2548 |
| running_vms | 1 |
| vcpus | 8 |
| vcpus_used | 1 |
+----------------------+-------+

Although I have two LVM backends:
cinder get-pools --detail

+-----------------------------+---------------------------------------------+
| Property | Value |
+-----------------------------+---------------------------------------------+
| QoS_support | False |
| allocated_capacity_gb | 0 |
| backend_state | up |
| driver_version | 3.0.0 |
| filter_function | None |
| free_capacity_gb | 171.0 |
| goodness_function | None |
| location_info | LVMVolumeDriver:r2s13:cinder-volumes:thin:0 |
| max_over_subscription_ratio | 20.0 |
| multiattach | True |
| name | r2s13@lvm#LVM_iSCSI |
| pool_name | LVM_iSCSI |
| provisioned_capacity_gb | 0.0 |
| reserved_percentage | 0 |
| storage_protocol | iSCSI |
| thick_provisioning_support | False |
| thin_provisioning_support | True |
| timestamp | 2018-12-17T12:18:22.024282 |
| total_capacity_gb | 171.0 |
| total_volumes | 1 |
| vendor_name | Open Source |
| volume_backend_name | LVM_iSCSI |
+-----------------------------+---------------------------------------------+
+-----------------------------+----------------------------------------------+
| Property | Value |
+-----------------------------+----------------------------------------------+
| QoS_support | False |
| allocated_capacity_gb | 100 |
| backend_state | up |
| driver_version | 3.0.0 ...


tags: added: scheduler
Balazs Gibizer (balazs-gibizer) wrote :

Is your storage configured as local storage or as a Cinder backend?
Are you trying to boot from volume?
What is the backend of the instance directory (i.e. [DEFAULT]/instances_path [1]) on your compute node?

I'm marking this Incomplete; please set it back to New when you reply.

[1] https://docs.openstack.org/nova/ussuri/configuration/config.html#DEFAULT.instances_path

Changed in nova:
status: New → Incomplete
Rodolfo (atomrag) wrote :

Hello, I use a Cinder backend.

The compute node uses KVM.

Cinder volumes are served over iSCSI, and another node runs cinder-volume with FC (Fibre Channel).

The node runs nova-compute and cinder-volume.

In one setup, the compute node uses Cinder with FC storage.

In the other setup, the compute node uses Cinder over iSCSI, with the storage on the same host. That setup works without problems.

The only problem is when the storage is outside the local host, because the compute node's local disk is small.

For example, with the FC storage I have much more free space. But the scheduler only sees the size of the local storage on the compute node, and the API does not count the free space of the storage backend. That is the only problem. As a workaround I use resources:DISK_GB=0, because using the allocation ratio did not work.
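The mismatch can be illustrated with the numbers from the `openstack hypervisor stats show` and `cinder get-pools` output earlier in the thread (a simplified sketch, not nova code; the 100 GB root-disk request is an assumed example):

```python
# Sketch: the scheduler checks only the hypervisor's local free disk,
# while the Cinder pool actually has plenty of free capacity.
hypervisor = {"local_gb": 125, "free_disk_gb": 25}  # from hypervisor stats
cinder_pool = {"free_capacity_gb": 171.0}           # from cinder get-pools

requested_root_gb = 100  # assumed flavor root disk, for illustration

# Scheduler-side view: only local hypervisor disk counts.
schedulable = requested_root_gb <= hypervisor["free_disk_gb"]
# Cinder-side view: the backing volume itself would fit easily.
volume_fits = requested_root_gb <= cinder_pool["free_capacity_gb"]

print(schedulable, volume_fits)  # False True
```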

Apologies for my English.

Changed in nova:
status: Incomplete → New