Instance cannot be launched [no valid host was found]

Bug #1320261 reported by Julia Aranovich
This bug affects 2 people
Affects              Status    Importance  Assigned to    Milestone
Fuel for OpenStack   Invalid   Medium      Fuel QA Team
6.1.x                Invalid   Medium      Fuel QA Team

Bug Description

{"build_id": "2014-05-16_01-10-31", "mirantis": "yes", "build_number": "208", "ostf_sha": "353f918197ec53a00127fd28b9151f248a2a2d30", "nailgun_sha": "89e7224f0b87284f75bf02b956b640768fabe352", "production": "docker", "api": "1.0", "fuelmain_sha": "861f4410bd07ecf40c1f87785dd0c86de387f2cf", "astute_sha": "c0418e1739cceb864fd2c548a7c6b9ad6a46ed86", "release": "5.0", "fuellib_sha": "68ef90fdc9fe5a3978f756a96a3fa29d5f1a2929"}

Environment settings: CentOS, HA, KVM, Nova-network (Flat), network tagging - yes.
Nodes: 3 controllers, 1 compute, 1 cinder.
Additional services: -.

The env deployed successfully, but the functional OSTF tests failed: the instance cannot reach Active status.
In Horizon the instance ended up in 'Error' status with the following warning: "Error: Failed to launch instance "JA": Please try again later [Error: No valid host was found. ]. "
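A quick way to triage "No valid host was found" is to check the scheduler's reasoning on a controller node; a minimal sketch, assuming the default nova log locations of an Icehouse-era deployment:

# Which filter rejected the hosts? Look for messages like "... returned 0 hosts"
grep -i "no valid host" /var/log/nova/nova-scheduler.log
grep -i "returned 0 hosts" /var/log/nova/nova-scheduler.log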

Revision history for this message
Julia Aranovich (jkirnosova) wrote :
Revision history for this message
Mike Scherbakov (mihgen) wrote :

This usually happens if your compute host doesn't have enough memory to launch the instance. The default m1.tiny flavor requires 512 MB free on the compute node. If the whole compute node has only 1 GB, it is unlikely that at least 512 MB is left for VMs.
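A minimal sanity check along these lines, assuming the stock nova CLI and the default flavor names:

# On the compute node: memory actually free for VMs
free -m
# On a controller: RAM requested by the flavor (m1.tiny defaults to 512 MB)
nova flavor-show m1.tiny | grep -w ram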

Revision history for this message
Bogdan Dobrelya (bogdando) wrote :

Please clarify whether the root cause is the one Mike suggested. If so, the ticket should be marked as Invalid. For now, I am setting it to Incomplete (clarification is required).

Changed in fuel:
status: New → Incomplete
importance: Undecided → High
Revision history for this message
Andrey Sledzinskiy (asledzinskiy) wrote :

The same problem occurs in system test runs on CI:

http://jenkins-product.srt.mirantis.net:8080/view/0_0_swarm/job/master_fuelmain.system_test.centos.thread_4/61/testReport/%28root%29/ceph_rados_gw/ceph_rados_gw/

http://jenkins-product.srt.mirantis.net:8080/view/0_0_swarm/job/master_fuelmain.system_test.ubuntu.thread_1/62/testReport/junit/%28root%29/ceph_multinode_with_cinder/ceph_multinode_with_cinder/

Error in logs - Insufficient free space for volume creation (requested / avail): 1/0.0
2014-05-19T07:50:47.101944+00:00 debug: 2014-05-19 07:50:43.822 21458 ERROR cinder.scheduler.flows.create_volume [req-d3e5b9c9-6cb2-474f-bb2c-7056251d0585 22c3f2d45c184a58beff19859727d356 ad891766eb3b439f9cd0266054ac7452 - - -] Failed to schedule_create_volume: No valid host was found.
2014-05-19T07:50:47.101944+00:00 debug: 2014-05-19 07:50:43.823 21458 DEBUG cinder.volume.flows.common [req-d3e5b9c9-6cb2-474f-bb2c-7056251d0585 22c3f2d45c184a58beff19859727d356 ad891766eb3b439f9cd0266054ac7452 - - -] Updating volume: 11b5f310-187b-4aa5-8c6f-048d48548f50 with {'status': 'error'} due to: No valid host was found. error_out_volume /usr/lib/python2.6/site-packages/cinder/volume/flows/common.py:87

But if we revert the environment and run the OSTF volume creation test, it passes; manual volume creation also succeeds.
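For the cinder-side "Insufficient free space" message, a minimal check on the cinder node, assuming the default LVM backend with a volume group named cinder-volumes:

# Free space in the volume group backing LVM-based cinder volumes
vgs cinder-volumes
# State of the cinder services as seen by the scheduler
cinder service-list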

Logs are attached

Revision history for this message
Andrey Sledzinskiy (asledzinskiy) wrote :
Revision history for this message
Andrey Sledzinskiy (asledzinskiy) wrote :
Changed in fuel:
status: Incomplete → New
Revision history for this message
Vitaly Kramskikh (vkramskikh) wrote :

Got the same error on a compute node with 1.2 GB RAM.

Revision history for this message
Julia Aranovich (jkirnosova) wrote :

I reproduced this failure on the following env: Ubuntu, Multi-node, KVM, Nova-network (VLAN), network tagging - yes. Nodes: 1 controller (2GB RAM), 1 compute (2GB RAM), 2 ceph (1GB RAM).
Tried to launch an instance with the m1.tiny flavor.

Revision history for this message
Julia Aranovich (jkirnosova) wrote :

Attached a snapshot of the last env where the bug was reproduced.

Revision history for this message
Bogdan Dobrelya (bogdando) wrote :

Could be related to the number of free vCPUs as well:
rgrep "Free VCPUS" .
./node-2.domain.tld/nova-compute.log:2014-05-19T11:05:19.927783+00:00 debug: 2014-05-19 11:05:19.715 24181 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1
./node-2.domain.tld/nova-compute.log:2014-05-19T11:06:29.953077+00:00 debug: 2014-05-19 11:06:20.005 24181 AUDIT nova.compute.resource_tracker [req-bc2a01e3-b14c-45d6-b7b1-9400a1aad030 None] Free VCPUS: 0
./node-2.domain.tld/nova-compute.log:2014-05-19T11:07:30.003649+00:00 debug: 2014-05-19 11:07:20.421 24181 AUDIT nova.compute.resource_tracker [-] Free VCPUS: -2
./node-2.domain.tld/nova-compute.log:2014-05-19T11:08:30.039844+00:00 debug: 2014-05-19 11:08:20.733 24181 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1
./node-2.domain.tld/nova-compute.log:2014-05-19T11:09:30.092597+00:00 debug: 2014-05-19 11:09:20.967 24181 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0
./node-2.domain.tld/nova-compute.log:2014-05-19T11:10:20.136326+00:00 debug: 2014-05-19 11:10:19.242 24181 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0
./node-2.domain.tld/nova-compute.log:2014-05-19T11:10:30.142186+00:00 debug: 2014-05-19 11:10:21.079 24181 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0
./node-2.domain.tld/nova-compute.log:2014-05-19T11:11:30.173096+00:00 debug: 2014-05-19 11:11:21.334 24181 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0
./node-2.domain.tld/nova-compute.log:2014-05-19T11:12:30.213471+00:00 debug: 2014-05-19 11:12:21.616 24181 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0
./node-2.domain.tld/nova-compute.log:2014-05-19T11:13:30.255292+00:00 debug: 2014-05-19 11:13:21.847 24181 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0
./node-2.domain.tld/nova-compute.log:2014-05-19T11:14:30.307783+00:00 debug: 2014-05-19 11:14:22.095 24181 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0
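The same counters can also be read through the API; a sketch using the classic novaclient commands of that era:

# Totals across all compute nodes: vcpus vs. vcpus_used, free_ram_mb, etc.
nova hypervisor-stats
# List the hypervisors (per-host details via `nova hypervisor-show <host>`)
nova hypervisor-list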

Changed in fuel:
status: New → Confirmed
status: Confirmed → Triaged
Revision history for this message
Bogdan Dobrelya (bogdando) wrote :

Suggested solution: try increasing the number of vCPUs on the compute nodes and reproduce the issue again.
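If the compute nodes are themselves libvirt VMs (a virtual Fuel lab), one way to do this is via virsh on the lab host; a sketch, with the domain name as a placeholder:

# Raise the maximum and current vCPU count of the (hypothetical) slave-2 domain in its persistent config
virsh setvcpus slave-2 2 --config --maximum
virsh setvcpus slave-2 2 --config
# The change takes effect after the domain is power-cycled
virsh shutdown slave-2
# ...then start it again once it has powered off
virsh start slave-2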

Changed in fuel:
assignee: Fuel Library Team (fuel-library) → Fuel QA Team (fuel-qa)
Revision history for this message
Bogdan Dobrelya (bogdando) wrote :

The core filters enabled in https://bugs.launchpad.net/fuel/+bug/1300027 are likely the root cause of this issue in the case of free VCPUS = 0.
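To confirm whether the CoreFilter is in play, the enabled filter list can be checked on a controller; a sketch assuming the Icehouse-era option names:

# CoreFilter enforces the vCPU limit; with free VCPUS = 0 it leads straight to NoValidHost
grep scheduler_default_filters /etc/nova/nova.conf
# Overcommit ratio applied by CoreFilter
grep cpu_allocation_ratio /etc/nova/nova.conf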

Revision history for this message
Mike Scherbakov (mihgen) wrote :

In fuel-snapshot-2014-05-19_11-14-48.tgz I found zero messages like "No valid host was found", but there is an exception in the nova-compute log on node-2:
libvirtError: internal error no supported architecture for os type 'hvm'

Revision history for this message
Mike Scherbakov (mihgen) wrote :

The same exception is found in the very first diagnostic snapshot: libvirtError: internal error no supported architecture for os type 'hvm'

Revision history for this message
Mike Scherbakov (mihgen) wrote :

Andrey's issue is different; it concerns Cinder volumes instead and is being addressed as Bug #1320248.

This one is actually Won't Fix: it is an attempt to run KVM VMs inside OpenStack while nested virtualization is not enabled. It should work if you use the qemu virt type for Nova. Please reopen if it still fails with qemu.
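A sketch of this workaround and of checking for nested virtualization; option names vary between releases:

# On the host running the virtual lab: is nested virtualization enabled? (kvm_amd for AMD CPUs)
cat /sys/module/kvm_intel/parameters/nested
# On each compute node: find the current virt type and switch it to qemu, then restart nova-compute
# (the option is virt_type in the [libvirt] section on newer releases, libvirt_type in [DEFAULT] on older ones)
grep -E "virt_type|libvirt_type" /etc/nova/nova.conf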

Changed in fuel:
status: Triaged → Won't Fix
Revision history for this message
Tatyana Dubyk (tdubyk) wrote :

Instance cannot be launched from volume

I reproduced this failure on the following env:
Ubuntu, HA, vCenter, Nova-network (flat DHCP). 3 nodes: 1 controller + 1 cinder (total 4 GB RAM).
Tried to launch an instance with the m1.tiny flavor.
----------------------------------------------------------------------------------------------------

Volume Details: test_vol

Name:         test_vol
ID:           90cbba5f-538e-449f-a91a-e21567e0164c
Description:  test
Status:       Available
Size:         1 GB
Created:      Feb. 6, 2015, 5:09 p.m.
Attached To:  Not attached
Metadata:     readonly = False

----------------------------------------------------------------------------------------------------
All services on each controller have a 'smile' status.
No errors in the Astute logs.
----------------------------------------------------------------------------------------------------
[root@nailgun ~]# fuel --fuel-version
api: '1.0'
astute_sha: ef8aa0fd0e3ce20709612906f1f0551b5682a6ce
auth_required: true
build_id: 2015-02-05_20-51-08
build_number: '68'
feature_groups:
- mirantis
fuellib_sha: c3912b24e58e3d3a86ba77c25ee7b1ade2ea572c
fuelmain_sha: 2d15e4e6da8970d1c61eebbfdfbd5f49d14b23ac
nailgun_sha: ac0b2eca001750178b3305e4a958050be5b2634a
ostf_sha: df7ea052abd77148cfd0edd453bc5ff572b82cdc
production: docker
release: 5.1.2
release_versions:
  2014.1.3-5.1.2:
    VERSION:
      api: '1.0'
      astute_sha: ef8aa0fd0e3ce20709612906f1f0551b5682a6ce
      build_id: 2015-02-05_20-51-08
      build_number: '68'
      feature_groups:
      - mirantis
      fuellib_sha: c3912b24e58e3d3a86ba77c25ee7b1ade2ea572c
      fuelmain_sha: 2d15e4e6da8970d1c61eebbfdfbd5f49d14b23ac
      nailgun_sha: ac0b2eca001750178b3305e4a958050be5b2634a
      ostf_sha: df7ea052abd77148cfd0edd453bc5ff572b82cdc
      production: docker
      release: 5.1.2

Changed in fuel:
status: Won't Fix → Confirmed
milestone: 5.0 → 5.1.2
Revision history for this message
Tatyana Dubyk (tdubyk) wrote :

Settings for vCenter:
export VCENTER_IP= 172.16.0.254
export VCENTER_USERNAME= <email address hidden>
export VCENTER_PASSWORD= Qwer!1234
export VCENTER_CLUSTERS= Cluster1,Cluster2

Mike Scherbakov (mihgen)
no longer affects: fuel/6.0.x
Mike Scherbakov (mihgen)
no longer affects: fuel/4.1.x
no longer affects: fuel/5.0.x
Revision history for this message
Bogdan Dobrelya (bogdando) wrote :

@Tatyana, please provide the logs.

Revision history for this message
Tatyanka (tatyana-leontovich) wrote :

Moving to Invalid for 6.1: we have such tests in OSTF and they passed throughout the entire 6.1 release.
