ironic reports "No valid host was found"

Bug #1867013 reported by zhangss
This bug affects 1 person
Affects: kolla-ansible
  Status: Invalid | Importance: Medium | Assigned to: Mark Goddard
Milestone: Stein
  Status: Triaged | Importance: Medium | Assigned to: Mark Goddard

Bug Description

os: CentOS 7.6
deploy method: multinode
controller (contained components): nova, ironic, neutron, ...

cat globals.yml
kolla_base_distro: "centos"
kolla_install_type: "binary"
openstack_release: "stein"
kolla_internal_vip_address: "10.16.138.233"
network_interface: "enp2s0f0"
tunnel_interface: "enp2s0f2"
neutron_external_interface: "enp2s0f3"
enable_ironic: "yes"
enable_nova_serialconsole_proxy: "yes"
ironic_dnsmasq_dhcp_range: "10.16.138.235,10.16.138.240"
ironic_dnsmasq_default_gateway: "10.16.138.1"
ironic_cleaning_network: "4a6c5c9a-0875-4332-8727-320b20b2d8fe"
tempest_image_id:
tempest_flavor_ref_id:
tempest_public_network_id:
tempest_floating_network_name:

enp2s0f0: 10.16.138.0/24
enp2s0f2: vm's internal network (vxlan)
enp2s0f3: vm's external network (10.16.146.0-10.16.146.100) and baremetal node(ironic's compute node) service network (10.16.146.200-10.16.146.245) 10.16.146.0/24

The VM state is ERROR:
{u'message': u'No valid host was found. ', u'code': 500, u'details': u'Traceback (most recent call last):\n File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1368, in schedule_and_build_instances\n instance_uuids, return_alternates=True)\n File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 822, in _schedule_instances\n return_alternates=return_alternates)\n File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations\n instance_uuids, return_objects, return_alternates)\n File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 160, in select_destinations\n return cctxt.call(ctxt, \'select_destinations\', **msg_args)\n File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 178, in call\n retry=self.retry)\n File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 128, in _send\n retry=retry)\n File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 645, in send\n call_monitor_timeout, retry=retry)\n File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 636, in _send\n raise result\nNoValidHost_Remote: No valid host was found. \nTraceback (most recent call last):\n\n File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 229, in inner\n return func(*args, **kwargs)\n\n File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 156, in select_destinations\n raise exception.NoValidHost(reason="")\n\nNoValidHost: No valid host was found. \n\n', u'created': u'2020-03-11T14:26:02Z'}

nova-conductor.log
NoValidHost: No valid host was found.
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager Traceback (most recent call last):
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1368, in schedule_and_build_instances
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager instance_uuids, return_alternates=True)
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 822, in _schedule_instances
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager return_alternates=return_alternates)
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager instance_uuids, return_objects, return_alternates)
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 160, in select_destinations
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager return cctxt.call(ctxt, 'select_destinations', **msg_args)
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 178, in call
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager retry=self.retry)
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 128, in _send
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager retry=retry)
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 645, in send
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager call_monitor_timeout, retry=retry)
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 636, in _send
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager raise result
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager NoValidHost_Remote: No valid host was found.
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager Traceback (most recent call last):
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 229, in inner
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager return func(*args, **kwargs)
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 156, in select_destinations
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager raise exception.NoValidHost(reason="")
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager NoValidHost: No valid host was found.
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager
2020-03-11 22:26:02.131 25 ERROR nova.conductor.manager
2020-03-11 22:26:02.182 25 WARNING nova.scheduler.utils [req-914841ee-355a-42ae-bd7b-443f0b78b22d b824308b03d441d382a7227096f3fa89 78bc0643b1f84cf280694331b6e39dcf - default default] Failed to compute_task_build_instances: No valid host was found.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 229, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 156, in select_destinations
    raise exception.NoValidHost(reason="")

Tags: docs
Revision history for this message
zhangss (intentc) wrote :

supplement:
(virtualenv) [root@openstack01 ~]# openstack flavor list
+--------------------------------------+---------------------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------------+-------+------+-----------+-------+-----------+
| | | | | | | |
| e5a95e46-c9b7-432f-b376-ec31c7f0eebc | my-baremetal-flavor | 49152 | 477 | 0 | 16 | True |
+--------------------------------------+---------------------+-------+------+-----------+-------+-----------+
(virtualenv) [root@openstack01 ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | 73:90:72:75:b8:8e:02:a0:36:4f:8a:23:5f:ca:f2:3d |
+-------+-------------------------------------------------+
(virtualenv) [root@openstack01 ~]# openstack network list
+--------------------------------------+---------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+--------------------------------------+
| 4a6c5c9a-0875-4332-8727-320b20b2d8fe | ext-net | 6321c2df-f9b0-43f6-91ad-9e5728a16cda |
| 96ee5cc1-988a-46ac-9b93-8381daee18eb | int-net | ebfd2b2b-f47b-4581-ad49-fab89647482f |
+--------------------------------------+---------+--------------------------------------+

(virtualenv) [root@openstack01 ~]# openstack baremetal driver list
+---------------------+---------------------------------------+
| Supported driver(s) | Active host(s) |
+---------------------+---------------------------------------+
| ipmi | openstack03, openstack02, openstack01 |
+---------------------+---------------------------------------+
(virtualenv) [root@openstack01 ~]# openstack baremetal...


Mark Goddard (mgoddard) wrote :

This is a very common error and could be caused by many things. I would suggest checking the nova-compute-ironic.log file, and ironic-conductor.log.
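Under a kolla-ansible deployment those logs can typically be followed on the controller; a minimal sketch, assuming kolla's default /var/log/kolla layout:

```shell
# Follow the ironic-facing nova-compute log and the ironic conductor log
tail -f /var/log/kolla/nova/nova-compute-ironic.log
tail -f /var/log/kolla/ironic/ironic-conductor.log

# Or search both for scheduling-related messages
grep -iE 'error|no valid host' /var/log/kolla/nova/nova-compute-ironic.log
```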

zhangss (intentc) wrote :

nova-compute-ironic.log
2020-03-11 17:46:41.210 6 INFO nova.compute.manager [req-78a3945b-b3dc-4a2f-9871-d11dc3728c71 - - - - -] Deleting orphan compute node 19 hypervisor host is d584abc7-602b-4d09-b066-74347642d478, nodes are set([])
2020-03-11 17:46:41.270 6 INFO nova.scheduler.client.report [req-78a3945b-b3dc-4a2f-9871-d11dc3728c71 - - - - -] Deleted resource provider d584abc7-602b-4d09-b066-74347642d478
2020-03-11 17:47:41.155 6 ERROR nova.compute.manager [req-78a3945b-b3dc-4a2f-9871-d11dc3728c71 - - - - -] No compute node record for host openstack03-ironic: ComputeHostNotFound_Remote: Compute host openstack03-ironic could not be found.
2020-03-11 17:48:43.151 6 ERROR nova.compute.manager [req-78a3945b-b3dc-4a2f-9871-d11dc3728c71 - - - - -] No compute node record for host openstack03-ironic: ComputeHostNotFound_Remote: Compute host openstack03-ironic could not be found.
2020-03-11 17:49:44.156 6 ERROR nova.compute.manager [req-78a3945b-b3dc-4a2f-9871-d11dc3728c71 - - - - -] No compute node record for host openstack03-ironic: ComputeHostNotFound_Remote: Compute host openstack03-ironic could not be found.
2020-03-11 17:50:44.154 6 ERROR nova.compute.manager [req-78a3945b-b3dc-4a2f-9871-d11dc3728c71 - - - - -] No compute node record for host openstack03-ironic: ComputeHostNotFound_Remote: Compute host openstack03-ironic could not be found.
2020-03-11 17:50:44.232 6 WARNING nova.compute.resource_tracker [req-78a3945b-b3dc-4a2f-9871-d11dc3728c71 - - - - -] No compute node record for openstack03-ironic:e73df8d1-dd61-4c44-ab90-c2d4c59417be: ComputeHostNotFound_Remote: Compute host openstack03-ironic could not be found.
2020-03-11 17:50:44.258 6 INFO nova.compute.resource_tracker [req-78a3945b-b3dc-4a2f-9871-d11dc3728c71 - - - - -] Compute node record created for openstack03-ironic:e73df8d1-dd61-4c44-ab90-c2d4c59417be with uuid: e73df8d1-dd61-4c44-ab90-c2d4c59417be
2020-03-11 17:50:44.488 6 INFO nova.scheduler.client.report [req-78a3945b-b3dc-4a2f-9871-d11dc3728c71 - - - - -] [req-9cf95752-1fad-4d2b-915e-4d7d55d2c553] Created resource provider record via placement API for resource provider with UUID e73df8d1-dd61-4c44-ab90-c2d4c59417be and name e73df8d1-dd61-4c44-ab90-c2d4c59417be.
2020-03-11 18:08:57.686 6 ERROR oslo_messaging.rpc.server [req-8fcf3f81-a449-4a58-a64b-080909d06788 b824308b03d441d382a7227096f3fa89 78bc0643b1f84cf280694331b6e39dcf - default default] Exception during message handling: NotImplementedError
2020-03-11 18:08:57.686 6 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2020-03-11 18:08:57.686 6 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 166, in _process_incoming
2020-03-11 18:08:57.686 6 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2020-03-11 18:08:57.686 6 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, in dispatch
2020-03-11 18:08:57.686 6 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2020-03-11 18:08:57.686 6 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging...


Mark Goddard (mgoddard) wrote :

Have you registered ironic nodes, and moved them to the available state? Do they have a 'resource_class' field set? Does the flavor have the required extra_specs to match the node's resource class and zero out the standard resource classes?

https://docs.openstack.org/ironic/stein/install/configure-nova-flavors.html

openstack hypervisor list will also tell you if the nodes are available for use by nova.
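The checks above can be sketched as a command sequence; the node and flavor names are taken from this report, while the resource class value (baremetal-resource-class, which placement exposes as CUSTOM_BAREMETAL_RESOURCE_CLASS) is an example:

```shell
# Verify the node is registered, available, and has a resource_class set
openstack baremetal node show baremetal-node \
  -f value -c provision_state -c resource_class

# If the resource_class is unset, set one
openstack baremetal node set baremetal-node \
  --resource-class baremetal-resource-class

# Move the node through manageable to the available state
openstack baremetal node manage baremetal-node
openstack baremetal node provide baremetal-node

# The flavor must request the custom resource class and zero out the standard ones
openstack flavor set my-baremetal-flavor \
  --property resources:CUSTOM_BAREMETAL_RESOURCE_CLASS=1 \
  --property resources:VCPU=0 \
  --property resources:MEMORY_MB=0 \
  --property resources:DISK_GB=0

# Confirm nova sees the node as a hypervisor
openstack hypervisor list
```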

zhangss (intentc) wrote :

I have registered the ironic node and associated the resource class. Here are the details.
(virtualenv) [root@openstack01 ~]# openstack baremetal node show baremetal-node
+------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_uuid | None |
| automated_clean | None |
| bios_interface | no-bios |
| boot_interface | pxe |
| chassis_uuid | None |
| clean_step | {} |
| conductor | openstack02 |
| conductor_group | |
| console_enabled | False ...

Mark Goddard (mgoddard) wrote :

Enabling debug logging in nova scheduler can help to determine why it is not selecting a host.
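With kolla-ansible this can be done through its config override mechanism; a minimal sketch, assuming the documented /etc/kolla/config layout:

```ini
# /etc/kolla/config/nova.conf  (merged into nova.conf for all nova services)
[DEFAULT]
debug = True
```

Then apply it with `kolla-ansible -i multinode reconfigure --tags nova` and watch nova-scheduler.log for the filter results.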

zhangss (intentc) wrote :

nova-scheduler reports an error:
Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up.
I find that the node resources are sufficient.
The baremetal node is an individual node.
(virtualenv) [root@openstack01 ~]# openstack baremetal node show baremetal-node
+------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_uuid | None |
| automated_clean | None |
| bios_interface | no-bios |
| boot_interface | pxe |
| chassis_uuid | None |
| clean_step | {} |
| conductor | openstack02 |
| conductor_group | ...

Mark Goddard (mgoddard) wrote :

You could try installing osc-placement and checking if the resource providers for the nodes have the correct inventory.

It also looks like you haven't zeroed out the standard resource classes:

$ openstack flavor set --property resources:VCPU=0 my-baremetal-flavor
$ openstack flavor set --property resources:MEMORY_MB=0 my-baremetal-flavor
$ openstack flavor set --property resources:DISK_GB=0 my-baremetal-flavor

From https://docs.openstack.org/ironic/stein/install/configure-nova-flavors.html
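A sketch of those placement checks (osc-placement installs from PyPI; the provider UUID here is the node UUID seen earlier in this thread):

```shell
# Install the placement plugin for the openstack CLI
pip install osc-placement

# List resource providers; each ironic node appears as a provider
openstack resource provider list

# Inspect the provider's inventory; a correctly configured node exposes
# only its custom resource class, e.g. CUSTOM_BAREMETAL_RESOURCE_CLASS
openstack resource provider inventory list 9c9bdc9a-eb1d-4ec5-abb4-f0aa2504d7f6

# Check current usage against the provider
openstack resource provider usage show 9c9bdc9a-eb1d-4ec5-abb4-f0aa2504d7f6
```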

zhangss (intentc) wrote :

I actually deploy and use ironic via kolla-ansible, as shown in the following doc:
https://docs.openstack.org/kolla-ansible/stein/reference/bare-metal/ironic-guide.html
And I can't find anything about zeroing out the standard resource classes in this doc.
Could you tell me where I can download the osc-placement client?

Radosław Piliszek (yoctozepto) wrote :

You can install it from PyPI with pip.

Your problem is likely that of not-zeroed values.

@Mark https://review.opendev.org/713794 missed Stein but I guess needs amending for it?

Changed in kolla-ansible:
assignee: nobody → Mark Goddard (mgoddard)
importance: Undecided → Medium
status: New → Invalid
tags: added: docs
zhangss (intentc) wrote :

2020-03-19 18:02:26.904 26 DEBUG nova.scheduler.filters.ram_filter [req-944b91df-b606-4045-843d-4104482e2fb0 b824308b03d441d382a7227096f3fa89 78bc0643b1f84cf280694331b6e39dcf - default default] (openstack03-ironic, 9c9bdc9a-eb1d-4ec5-abb4-f0aa2504d7f6) ram: 0MB disk: 0MB io_ops: 0 instances: 0 does not have 512 MB usable ram before overcommit, it only has 0 MB. host_passes /usr/lib/python2.7/site-packages/nova/scheduler/filters/ram_filter.py:46
2020-03-19 18:02:26.904 26 INFO nova.filters [req-944b91df-b606-4045-843d-4104482e2fb0 b824308b03d441d382a7227096f3fa89 78bc0643b1f84cf280694331b6e39dcf - default default] Filter RamFilter returned 0 hosts
2020-03-19 18:02:26.905 26 DEBUG nova.filters [req-944b91df-b606-4045-843d-4104482e2fb0 b824308b03d441d382a7227096f3fa89 78bc0643b1f84cf280694331b6e39dcf - default default] Filtering removed all hosts for the request with instance ID '56eb5a9e-d57d-47e1-8a7b-a5e9d045c2d2'. Filter results: [('RetryFilter', [(u'openstack03-ironic', u'9c9bdc9a-eb1d-4ec5-abb4-f0aa2504d7f6')]), ('AvailabilityZoneFilter', [(u'openstack03-ironic', u'9c9bdc9a-eb1d-4ec5-abb4-f0aa2504d7f6')]), ('RamFilter', None)] get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:129
2020-03-19 18:02:26.905 26 INFO nova.filters [req-944b91df-b606-4045-843d-4104482e2fb0 b824308b03d441d382a7227096f3fa89 78bc0643b1f84cf280694331b6e39dcf - default default] Filtering removed all hosts for the request with instance ID '56eb5a9e-d57d-47e1-8a7b-a5e9d045c2d2'. Filter results: ['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'RamFilter: (start: 1, end: 0)']

zhangss (intentc) wrote :

(virtualenv) [root@openstack01 ~]# openstack compute service list
+-----+------------------+--------------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+-----+------------------+--------------------+----------+---------+-------+----------------------------+
| 10 | nova-conductor | openstack02 | internal | enabled | up | 2020-03-19T10:17:04.000000 |
| 28 | nova-conductor | openstack01 | internal | enabled | up | 2020-03-19T10:17:03.000000 |
| 34 | nova-conductor | openstack03 | internal | enabled | up | 2020-03-19T10:17:03.000000 |
| 46 | nova-scheduler | openstack02 | internal | enabled | up | 2020-03-19T10:17:03.000000 |
| 67 | nova-scheduler | openstack03 | internal | enabled | up | 2020-03-19T10:17:02.000000 |
| 88 | nova-scheduler | openstack01 | internal | enabled | up | 2020-03-19T10:17:01.000000 |
| 115 | nova-consoleauth | openstack02 | internal | enabled | up | 2020-03-19T10:17:02.000000 |
| 118 | nova-consoleauth | openstack01 | internal | enabled | up | 2020-03-19T10:16:56.000000 |
| 121 | nova-consoleauth | openstack03 | internal | enabled | up | 2020-03-19T10:17:02.000000 |
| 199 | nova-compute | openstack05 | nova | enabled | up | 2020-03-19T10:17:04.000000 |
| 202 | nova-compute | openstack04 | sr-iov | enabled | up | 2020-03-19T10:17:03.000000 |
| 208 | nova-compute | openstack01-ironic | nova | enabled | up | 2020-03-19T10:16:58.000000 |
| 211 | nova-compute | openstack02-ironic | nova | enabled | up | 2020-03-19T10:16:59.000000 |
| 214 | nova-compute | openstack03-ironic | nova | enabled | up | 2020-03-19T10:17:02.000000 |
+-----+------------------+--------------------+----------+---------+-------+----------------------------+
Below, the openstack03-ironic node is my openstack controller node (a physical machine). If I want to use the ironic service, do I have to modify the [ironic:children] and [nova-compute-ironic:children] fields in the inventory (multinode file)?
For example, for some new baremetal node that hasn't been used yet? Because I can't find any explanation about these fields.

Mark Goddard (mgoddard) wrote :

That service list looks fine. We change the hostname used by nova-compute to avoid a conflict with a virtualised nova-compute service on the same host.

The scheduler logs suggest that reserved_host_memory_mb has not been set to 0 in nova.conf for nova-compute-ironic. Can you check that?

zhangss (intentc) wrote :

(virtualenv) [root@openstack01 ~]# cat /etc/kolla/nova-compute-ironic/nova.conf | grep reserved_host_memory_mb
reserved_host_memory_mb = 0
[root@openstack02 ~]# cat /etc/kolla/nova-compute-ironic/nova.conf | grep reserved_host_memory_mb
reserved_host_memory_mb = 0
[root@openstack03 ~]# cat /etc/kolla/nova-compute-ironic/nova.conf | grep reserved_host_memory_mb
reserved_host_memory_mb = 0
I am certain reserved_host_memory_mb has been set to 0 and it is mapped into the nova_compute_ironic docker container.
I run all of ironic's components (ironic_neutron_agent, nova_compute_ironic, ironic_dnsmasq, ironic_pxe, ironic_inspector, ironic_api, ironic_conductor) on the openstack controller nodes, and then create one ironic node with the "openstack baremetal create" command. Can a baremetal node be deployed this way, or must these components run on a new, clean node waiting for management?

Mark Goddard (mgoddard) wrote :

You have it correct - no control plane services should run on the baremetal node.

I wonder where that 512MB RAM request is coming from then. Are you definitely using the my-baremetal-flavor flavor to create the node?

zhangss (intentc) wrote :

Yes, I am definitely using the my-baremetal-flavor flavor to create the bare metal node.
I think the 512MB RAM request comes from the flavor having been created before the zeroed values were set.
https://docs.openstack.org/kolla-ansible/stein/reference/bare-metal/ironic-guide.html#post-deployment-configuration
https://docs.openstack.org/ironic/stein/install/configure-nova-flavors.html

for example step
Create a baremetal flavor:

openstack flavor create --ram 512 --disk 1 --vcpus 1 my-baremetal-flavor
openstack flavor set my-baremetal-flavor --property \
  resources:CUSTOM_BAREMETAL_RESOURCE_CLASS=1

Mark Goddard (mgoddard) wrote :

From your earlier message it looks like the flavor has more ram though:

 openstack flavor list
+--------------------------------------+---------------------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------------+-------+------+-----------+-------+-----------+
| | | | | | | |
| e5a95e46-c9b7-432f-b376-ec31c7f0eebc | my-baremetal-flavor | 49152 | 477 | 0 | 16 | True |
+--------------------------------------+---------------------+-------+----
