Failed deployment with error message "Message: No valid host was found. , Code: 500"

Bug #1790622 reported by Juan Badia Payno
Affects: tripleo
Status: Incomplete
Importance: High
Assigned to: Unassigned

Bug Description

Deploying with tripleo-quickstart ended in a failed overcloud deployment caused by "No valid host was found."

Steps to reproduce:
===================
bash quickstart.sh -X -T all -t all --retain-inventory --release master-tripleo-ci virthost

More info:
=========
The overcloud deployment script was the following; adding the last -e /home/stack/params.yaml made no difference:
****
openstack overcloud deploy \
    --templates /usr/share/openstack-tripleo-heat-templates \
    --libvirt-type qemu \
    --control-flavor oooq_control --compute-flavor oooq_compute \
    --ceph-storage-flavor oooq_ceph --block-storage-flavor oooq_blockstorage \
    --swift-storage-flavor oooq_objectstorage \
    --timeout 90 \
    -e /home/stack/cloud-names.yaml \
    -e /home/stack/containers-prepare-parameter.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
    -e /home/stack/network-environment.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml \
    -e /home/stack/enable-tls.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-ip.yaml \
    -e /home/stack/inject-trust-anchor.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
    --validation-warnings-fatal \
    -e /usr/share/openstack-tripleo-heat-templates/ci/environments/ovb-ha.yaml \
    -e /home/stack/params.yaml \
    "$@" && status_code=0 || status_code=$?
****

cat params.yaml
---
parameter_defaults:
  ControllerCount: 1
  ComputeCount: 1
  OvercloudControllerFlavor: oooq_control
  OvercloudControlFlavor: oooq_control
  OvercloudComputeFlavor: oooq_compute
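
A quick way to confirm the flavor/profile wiring is to compare what the flavors request with what the nodes advertise. The commands below are a diagnostic sketch, run on the undercloud; the node UUIDs are the two nodes from this report:

# The flavors should exist and carry a capabilities:profile property
# matching the nodes' "profile:..." capability.
openstack flavor show oooq_control -f value -c ram -c disk -c vcpus -c properties
openstack flavor show oooq_compute -f value -c ram -c disk -c vcpus -c properties

# Each node must advertise at least the flavor's RAM/disk/CPUs.
openstack baremetal node show 62d40d2b-504d-41a5-81df-5902f27b6385 -f json -c properties
openstack baremetal node show bd0a9f9d-bc9d-4d3e-8bfd-4d0624536472 -f json -c properties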

cat failed_deployment_list.log
overcloud.Controller.0.Controller:
  resource_type: OS::TripleO::ControllerServer
  physical_resource_id: b87cc8a1-335c-4004-9dd6-7fe956c84968
  status: CREATE_FAILED
  status_reason: |
    ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. , Code: 500"
overcloud.Compute.0.NovaCompute:
  resource_type: OS::TripleO::ComputeServer
  physical_resource_id: d5969781-9383-485b-8133-f7d7a104dfe6
  status: CREATE_FAILED
  status_reason: |
    ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. , Code: 500"
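
(For reference, the same failure summary can be regenerated on the undercloud with the heat OSC plugin; a sketch, assuming python-heatclient is installed:)

# Show the failed resources of the overcloud stack with full status reasons.
openstack stack failures list --long overcloud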

openstack compute service list
+----+----------------+------------------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host                   | Zone     | Status  | State | Updated At                 |
+----+----------------+------------------------+----------+---------+-------+----------------------------+
| 1  | nova-scheduler | undercloud.localdomain | internal | enabled | up    | 2018-09-04T10:40:32.000000 |
| 6  | nova-conductor | undercloud.localdomain | internal | enabled | up    | 2018-09-04T10:40:37.000000 |
| 9  | nova-compute   | undercloud.localdomain | nova     | enabled | up    | 2018-09-04T10:40:32.000000 |
+----+----------------+------------------------+----------+---------+-------+----------------------------+

openstack flavor list
+--------------------------------------+--------------------+------+------+-----------+-------+-----------+
| ID                                   | Name               | RAM  | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+--------------------+------+------+-----------+-------+-----------+
| 2f73e480-9098-4293-9c5e-51c7659a8611 | oooq_compute       | 8192 | 49   | 0         | 2     | True      |
| 5ea65e6d-6c46-4719-800e-0b6678f4c0bb | oooq_objectstorage | 8192 | 49   | 0         | 2     | True      |
| 6c06685b-5b9c-4fcb-ab26-22d6d22489cb | ceph-storage       | 4096 | 40   | 0         | 1     | True      |
| 83d3e222-060e-48e5-b31c-f5e22f1ce1d3 | control            | 4096 | 40   | 0         | 1     | True      |
| 85749e73-0a41-4031-b1be-d7235419e0b8 | baremetal          | 4096 | 40   | 0         | 1     | True      |
| 8d43f86b-8746-444c-b719-36e9824c448e | block-storage      | 4096 | 40   | 0         | 1     | True      |
| a28c10a7-8155-4b44-ac2e-87f67739a0e4 | compute            | 4096 | 40   | 0         | 1     | True      |
| dbb93448-bdf4-4847-b727-4bcfe661db6c | oooq_ceph          | 8192 | 49   | 0         | 2     | True      |
| e64b1943-dd7a-4174-869b-d9b6468fca55 | oooq_control       | 8192 | 49   | 0         | 2     | True      |
| ed0a803d-3465-48f4-98ec-70bc352bc781 | oooq_blockstorage  | 8192 | 49   | 0         | 2     | True      |
| f68faac3-76ae-4d24-a775-24cdbaa6307b | swift-storage      | 4096 | 40   | 0         | 1     | True      |
+--------------------------------------+--------------------+------+------+-----------+-------+-----------+

openstack baremetal node list
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name      | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| 62d40d2b-504d-41a5-81df-5902f27b6385 | control-0 | None          | power off   | available          | False       |
| bd0a9f9d-bc9d-4d3e-8bfd-4d0624536472 | compute-0 | None          | power off   | available          | False       |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
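
Both nodes are available, powered off, and not in maintenance, so they should be schedulable. Whether they are actually exposed to the nova scheduler can be cross-checked with the commands below (a diagnostic sketch):

# Each Ironic node should show up as a hypervisor entry,
# and the aggregate stats should reflect their RAM/disk/CPUs.
openstack hypervisor list
openstack hypervisor stats show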

nova list

+--------------------------------------+-------------------------+--------+------------+-------------+----------+
| ID                                   | Name                    | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+----------+
| b87cc8a1-335c-4004-9dd6-7fe956c84968 | overcloud-controller-0  | ERROR  | -          | NOSTATE     |          |
| d5969781-9383-485b-8133-f7d7a104dfe6 | overcloud-novacompute-0 | ERROR  | -          | NOSTATE     |          |
+--------------------------------------+-------------------------+--------+------------+-------------+----------+

openstack baremetal node show -f json -c properties 62d40d2b-504d-41a5-81df-5902f27b6385
{
  "properties": {
    "memory_mb": "8192",
    "cpu_arch": "x86_64",
    "local_gb": "49",
    "cpus": "2",
    "capabilities": "profile:control,cpu_vt:true,cpu_hugepages:true,boot_option:local,cpu_aes:true,cpu_hugepages_1g:true"
  }
}
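
Note that on recent releases Nova schedules Ironic nodes via the node's resource_class plus a matching resources:CUSTOM_* property on the flavor, rather than via the memory_mb/local_gb/cpus values above, so that mapping is worth checking too. A hedged sketch:

# The node's resource_class (e.g. "baremetal") must correspond to a
# resources:CUSTOM_BAREMETAL='1' entry in the flavor's properties.
openstack baremetal node show 62d40d2b-504d-41a5-81df-5902f27b6385 -f value -c resource_class
openstack flavor show oooq_control -f value -c properties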

/var/log/containers/nova/nova-conductor.log
***
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Failed to schedule instances: NoValidHost_Remote: No valid host was found.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 154, in select_destinations
    raise exception.NoValidHost(reason="")

NoValidHost: No valid host was found.
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager Traceback (most recent call last):
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1206, in schedule_and_build_instances
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager instance_uuids, return_alternates=True)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 723, in _schedule_instances
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager return_alternates=return_alternates)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 907, in wrapped
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager return func(*args, **kwargs)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 53, in select_destinations
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager instance_uuids, return_objects, return_alternates)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager instance_uuids, return_objects, return_alternates)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 158, in select_destinations
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager return cctxt.call(ctxt, 'select_destinations', **msg_args)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 179, in call
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager retry=self.retry)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 133, in _send
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager retry=retry)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 584, in send
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager call_monitor_timeout, retry=retry)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 575, in _send
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager raise result
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager NoValidHost_Remote: No valid host was found.
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager Traceback (most recent call last):
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager return func(*args, **kwargs)
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 154, in select_destinations
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager raise exception.NoValidHost(reason="")
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager NoValidHost: No valid host was found.
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager
2018-09-04 09:30:16.273 25 ERROR nova.conductor.manager
2018-09-04 09:30:16.279 25 DEBUG oslo_concurrency.lockutils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-09-04 09:30:16.280 25 DEBUG oslo_concurrency.lockutils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-09-04 09:30:16.287 25 DEBUG oslo_db.sqlalchemy.engines [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308
2018-09-04 09:30:16.386 25 DEBUG nova.conductor.manager [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] [instance: 0d49e1ec-f350-4c47-a400-45472f503966] block_device_mapping [BlockDeviceMapping(attachment_id=<?>,boot_index=0,connection_info=None,created_at=<?>,delete_on_termination=True,deleted=<?>,deleted_at=<?>,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=<?>,image_id='891b95c6-9388-44e9-ad21-32f96ef76cd9',instance=<?>,instance_uuid=<?>,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=<?>,uuid=<?>,volume_id=None,volume_size=None)] _create_block_device_mapping /usr/lib/python2.7/site-packages/nova/conductor/manager.py:1107
2018-09-04 09:30:16.387 25 DEBUG oslo_concurrency.lockutils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-09-04 09:30:16.388 25 DEBUG oslo_concurrency.lockutils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-09-04 09:30:16.395 25 WARNING nova.scheduler.utils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Failed to compute_task_build_instances: No valid host was found.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 154, in select_destinations
    raise exception.NoValidHost(reason="")

NoValidHost: No valid host was found.
: NoValidHost_Remote: No valid host was found
***

grep req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 /var/log/containers/nova/nova-api.log
***
2018-09-04 09:30:14.111 19 DEBUG nova.api.openstack.wsgi [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Action: 'create', calling method: <bound method ServersController.create of <nova.api.openstack.compute.servers.ServersController object at 0x7f693bba4890>>, body: {"server": {"name": "overcloud-novacompute-0", "imageRef": "891b95c6-9388-44e9-ad21-32f96ef76cd9", "key_name": "default", "flavorRef": "2f73e480-9098-4293-9c5e-51c7659a8611", "user_data": "Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zOTY0MDA1MDE4NzU0NjM1ODk1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM5NjQwMDUwMTg3NTQ2MzU4OTU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzk2NDAwNTAxODc1NDYzNTg5NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvYmluL2Jhc2gKCiMgRklYTUUoc2hhZG93ZXIpIHRoaXMgaXMgYSB3b3JrYXJvdW5kIGZvciBjbG91ZC1pbml0IDAuNi4zIHByZXNlbnQgaW4gVWJ1bnR1CiMgMTIuMDQgTFRTOgojIGh0dHBzOi8vYnVncy5sYXVuY2hwYWQubmV0L2hlYXQvK2J1Zy8xMjU3NDEwCiMKIyBUaGUgb2xkIGNsb3VkLWluaXQgZG9lc24ndCBjcmVhdGUgdGhlIHVzZXJzIGRpcmVjdGx5IHNvIHRoZSBjb21tYW5kcyB0byBkbwojIHRoaXMgYXJlIGluamVjdGVkIHRob3VnaCBub3ZhX3V0aWxzLnB5LgojCiMgT25jZSB3ZSBkcm9wIHN1cHBvcnQgZm9yIDAuNi4zLCB3ZSBjYW4gc2FmZWx5IHJlbW92ZSB0aGlzLgoKCiMgaW4gY2FzZSBoZWF0LWNmbnRvb2xzIGhhcyBiZWVuIGluc3RhbGxlZCBmcm9tIHBhY2thZ2UgYnV0IG5vIHN5bWxpbmtzCiMgYXJlIHlldCBpbiAvb3B0L2F3cy9iaW4vCmNmbi1jcmVhdGUtYXdzLXN5bWxpbmtzCgojIERvIG5vdCByZW1vdmUgLSB0aGUgY2xvdWQgYm9vdGhvb2sgc2hvdWxkIGFsd2F5cyByZXR1cm4gc3VjY2VzcwpleGl0IDAKCi0tPT09PT09PT09PT09PT09Mzk2NDAwNTAxODc1NDYzNTg5NT09CkNvbnRlbnQtVHlwZTogdGV4dC9wYXJ0LWhhbmRsZXI7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJwYXJ0LWhhbmRsZXIucHkiCgojIHBhcnQtaGFuZGxlcgojCiMgICAgTGljZW5zZWQgdW5kZXIgdGhlIEFwYWNoZSBMaWNlbnNlLCBWZXJzaW9uIDIuMCAodGhlICJMaWNlbnNlIik7IHlvdSBtYXkKIyAgICBub3QgdXNlIHRoaXMgZmlsZSBleGNlcHQgaW4gY29tcGxpYW5jZSB3aXRoIHRoZSBMaWNlbnNlLiBZb3UgbWF5IG9idGFpbgojICAgIGEgY29weSBvZiB0aGUgTGljZW5zZSBhdAojCiMgICAgICAgICBodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKIwojICAgIFVubGVzcyByZXF1aXJlZCBieSBhcHBsaWNhYmxlIGxhdyBvciBhZ3JlZWQgdG8gaW4gd3JpdGluZywgc29mdHdhcmUKIyAgICBkaXN0cmlidXRlZCB1bmRlciB0aGUgTGljZW5zZSBpcyBkaXN0cmlidXRlZCBvbiBhbiAiQVMgSVMiIEJBU0lTLCBXSVRIT1VUCiMgICAgV0FSUkFOVElFUyBPUiBDT05ESVRJT05TIE9GIEFOWSBLSU5ELCBlaXRoZXIgZXhwcmVzcyBvciBpbXBsaWVkLiBTZWUgdGhlCiMgICAgTGljZW5zZSBmb3IgdGhlIHNwZWNpZmljIGxhbmd1YWdlIGdvdmVybmluZyBwZXJtaXNzaW9ucyBhbmQgbGltaXRhdGlvbnMKIyAgICB1bmRlciB0aGUgTGljZW5zZS4KCmltcG9ydCBkYXRldGltZQppbXBvcnQgZXJybm8KaW1wb3J0IG9zCmltcG9ydCBzeXMKCgpkZWYgbGlzdF90eXBlcygpOgogICAgcmV0dXJuKFsidGV4dC94LWNmbmluaXRkYXRhIl0pCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcj
oKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zOTY0MDA1MDE4NzU0NjM1ODk1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZQoKI2Nsb3VkLWNvbmZpZwpzc2hfYXV0aG9yaXplZF9rZXlzOiBbc3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDdENFcXhHaVRzbjZBRkRHOXQ1U0xJcGlNVTRxUFhqdURsdGRKdzM4K05IeHlWQUpTN1A4d2tSeCtUK2ZVc1JnL3k1dFlVS2QvdklJcXJRVXV3SGY1Z2pseDg0bzRDbm01VU5QZUVORVVuOTB4RGFBMFRIb0s1V1g0UjFaanMrTG5ObW5WaW5hNjF0MWR2aGhtTVZtVTFPWHViYVBnT3RjN2thamlMZ2Rtck5kb3hENGN2eTJNYnhtYXowOEVFM2NxdDJiYjljSk4zeFlnRWMvWHA5Nk1HUVFCZjVxbnFjdG43Uks1VkJuS3RZVHI0d3dXeWg3WTlDZ1o2bmYwR1pUeFhiMEdXM3psbDJpMHBTbXdLSjdYN0YyRlpIOEF0a0dpTTlGSTMzVkhKeGxiUmxXU29GVEV6ZEdDR0kzbkljeC91SURQWkpSYTNxSFliN2tnUWE2U0IKICAgIEdlbmVyYXRlZCBieSBUcmlwbGVPXQp1c2VyOiBoZWF0LWFkbWluCgotLT09PT09PT09PT09PT09PTM5NjQwMDUwMTg3NTQ2MzU4OTU9PQpDb250ZW50LVR5cGU6IHRleHQvcGxhaW47IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lCgoKLS09PT09PT09PT09PT09PT0zOTY0MDA1MDE4NzU0NjM1ODk1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZQoKCi0tPT09PT09PT09PT09PT09Mzk2NDAwNTAxODc1NDYzNTg5NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLWluaXQtZGF0YSIKCnsiZGVwbG95bWVudHMiOiBbXSwgIm9zLWNvbGxlY3QtY29uZmlnIjogeyJzcGxheSI6IDMwLCAicmVxdWVzdCI6IHsibWV0YWRhdGFfdXJsIjogImh0dHBzOi8vMTkyLjE2OC4yNC4yOjEzODA4L3YxL0FVVEhfODg5ZDRiMjJmMDYzNDhjZjkzNjA1ZGU3OTQ5MjcyYTQvb3YtYTVoc2dxbGhtZS0wLTJxN2NobGRpNHlpYS1Ob3ZhQ29tcHV0ZS12ZzJnbmh2cHNnemUvOTlkYTc4N2MtYjFhOC00YTgwLTk2M2YtM2JlYjNmOGFjN2JlP3RlbXBfdXJsX3NpZz1hNDI2OTE0NThmM2MzMmYwOWNjZDUwNzY0NmRhMGRkZDA1OGRiMmEzJnRlbXBfdXJsX2V4cGlyZXM9MjE0NzQ4MzU4NiJ9LCAiY29tbWFuZCI6ICJvcy1yZWZyZXNoLWNvbmZpZyAtLXRpbWVvdXQgMTQ0MDAiLCAiY29sbGVjdG9ycyI6IFsiZWMyIiwgInJlcXVlc3QiLCAibG9jYWwiXX19Ci0tPT09PT09PT09PT09PT09Mzk2NDAwNTAxODc1NDYzNTg5NT09LS0=", "max_count": 1, "min_count": 1, "networks": [{"uuid": "fbc460b7-2a26-4bea-90f7-665904ec920e", "port": "f04a7f0a-465b-4884-9b74-930eaf2bad7a"}]}} _process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:615
2018-09-04 09:30:14.421 19 DEBUG nova.network.neutronv2.api [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] validate_networks() for [(None, None, u'f04a7f0a-465b-4884-9b74-930eaf2bad7a', None)] validate_networks /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2075
2018-09-04 09:30:15.474 19 DEBUG nova.quota [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Getting quotas for project 889d4b22f06348cf93605de7949272a4. Resources: ['metadata_items'] _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:408
2018-09-04 09:30:15.477 19 DEBUG nova.quota [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Getting quotas for user 01dadd2ab2f54518a34ed57780bf36f8 and project 889d4b22f06348cf93605de7949272a4. Resources: ['metadata_items'] _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:400
2018-09-04 09:30:15.489 19 DEBUG nova.quota [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Getting quotas for project 889d4b22f06348cf93605de7949272a4. Resources: ['injected_files'] _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:408
2018-09-04 09:30:15.492 19 DEBUG nova.quota [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Getting quotas for user 01dadd2ab2f54518a34ed57780bf36f8 and project 889d4b22f06348cf93605de7949272a4. Resources: ['injected_files'] _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:400
2018-09-04 09:30:15.512 19 DEBUG nova.quota [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Getting quotas for project 889d4b22f06348cf93605de7949272a4. Resources: ['injected_file_content_bytes', 'injected_file_path_bytes'] _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:408
2018-09-04 09:30:15.515 19 DEBUG nova.quota [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Getting quotas for user 01dadd2ab2f54518a34ed57780bf36f8 and project 889d4b22f06348cf93605de7949272a4. Resources: ['injected_file_content_bytes', 'injected_file_path_bytes'] _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:400
2018-09-04 09:30:15.522 19 DEBUG oslo_concurrency.lockutils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-09-04 09:30:15.522 19 DEBUG oslo_concurrency.lockutils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-09-04 09:30:15.524 19 DEBUG oslo_concurrency.lockutils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Lock "5f6dcbeb-6b9b-49e9-9700-3e666beb4935" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-09-04 09:30:15.524 19 DEBUG oslo_concurrency.lockutils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Lock "5f6dcbeb-6b9b-49e9-9700-3e666beb4935" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-09-04 09:30:15.532 19 DEBUG nova.quota [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Getting quotas for project 889d4b22f06348cf93605de7949272a4. Resources: set(['instances', 'ram', 'cores']) _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:408
2018-09-04 09:30:15.535 19 DEBUG nova.quota [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Getting quotas for user 01dadd2ab2f54518a34ed57780bf36f8 and project 889d4b22f06348cf93605de7949272a4. Resources: set(['instances', 'ram', 'cores']) _get_quotas /usr/lib/python2.7/site-packages/nova/quota.py:400
2018-09-04 09:30:15.541 19 DEBUG nova.compute.api [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Going to run 1 instances... _provision_instances /usr/lib/python2.7/site-packages/nova/compute/api.py:873
2018-09-04 09:30:15.552 19 DEBUG nova.compute.api [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] [instance: 0d49e1ec-f350-4c47-a400-45472f503966] block_device_mapping [BlockDeviceMapping(attachment_id=<?>,boot_index=0,connection_info=None,created_at=<?>,delete_on_termination=True,deleted=<?>,deleted_at=<?>,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=<?>,image_id='891b95c6-9388-44e9-ad21-32f96ef76cd9',instance=<?>,instance_uuid=<?>,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=<?>,uuid=<?>,volume_id=None,volume_size=None)] _bdm_validate_set_size_and_instance /usr/lib/python2.7/site-packages/nova/compute/api.py:1339
2018-09-04 09:30:15.590 19 INFO nova.api.openstack.requestlog [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] 192.168.24.1 "POST /v2.1/servers" status: 202 len: 380 microversion: 2.65 time: 1.481040
2018-09-04 09:30:16.132 19 DEBUG nova.api.openstack.wsgi [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Calling method '<bound method Versions.index of <nova.api.openstack.compute.versions.Versions object at 0x7f693bf92fd0>>' _process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:618
2018-09-04 09:30:16.132 19 INFO nova.api.openstack.requestlog [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] 192.168.24.1 "OPTIONS /" status: 200 len: 407 microversion: - time: 0.000911

***

grep req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 /var/log/containers/nova/nova-scheduler.log
***
2018-09-04 09:30:15.637 27 DEBUG nova.scheduler.manager [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Starting to schedule for instances: [u'0d49e1ec-f350-4c47-a400-45472f503966'] select_destinations /usr/lib/python2.7/site-packages/nova/scheduler/manager.py:117
2018-09-04 09:30:15.638 27 DEBUG oslo_concurrency.lockutils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Lock "placement_client" acquired by "nova.scheduler.client.report._create_client" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-09-04 09:30:15.643 27 DEBUG oslo_concurrency.lockutils [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Lock "placement_client" released by "nova.scheduler.client.report._create_client" :: held 0.005s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-09-04 09:30:16.258 27 DEBUG nova.scheduler.manager [req-1ec31b6e-cd3f-40b4-84e4-e878e3799de7 01dadd2ab2f54518a34ed57780bf36f8 889d4b22f06348cf93605de7949272a4 - default default] Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up. select_destinations /usr/lib/python2.7/site-packages/nova/scheduler/manager.py:150

***
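
The "no allocation candidates" line points at the Placement service rather than the scheduler filters. With the osc-placement plugin installed, Placement can be asked the same question directly; a sketch (the resource amounts come from the oooq flavors above, and <provider-uuid> is a placeholder):

# One resource provider per Ironic node is expected.
openstack resource provider list

# Inspect what a given provider actually advertises.
openstack resource provider inventory list <provider-uuid>

# Ask Placement for candidates the way the scheduler does.
openstack allocation candidate list --resource VCPU=2 --resource MEMORY_MB=8192 --resource DISK_GB=49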

Tags: quickstart ux
Changed in tripleo:
status: New → Triaged
importance: Undecided → Medium
milestone: none → stein-1
Bogdan Dobrelya (bogdando) wrote:

That's exactly the issue I've also described in https://bugs.launchpad.net/tripleo/+bug/1788875, it seems! This is really confusing, especially for outside users trying tripleo-quickstart.

tags: added: quickstart ux
Changed in tripleo:
importance: Medium → High
Changed in tripleo:
milestone: stein-1 → stein-2
Changed in tripleo:
milestone: stein-2 → stein-3
Changed in tripleo:
milestone: stein-3 → stein-rc1
Changed in tripleo:
milestone: stein-rc1 → train-1
Changed in tripleo:
milestone: train-1 → train-2
Changed in tripleo:
milestone: train-2 → train-3
Changed in tripleo:
milestone: train-3 → ussuri-1
Changed in tripleo:
milestone: ussuri-1 → ussuri-2
wes hayutin (weshayutin)
Changed in tripleo:
milestone: ussuri-2 → ussuri-3
wes hayutin (weshayutin)
Changed in tripleo:
status: Triaged → Incomplete
wes hayutin (weshayutin)
Changed in tripleo:
milestone: ussuri-3 → ussuri-rc3
wes hayutin (weshayutin)
Changed in tripleo:
milestone: ussuri-rc3 → victoria-1
Changed in tripleo:
milestone: victoria-1 → victoria-3