VMs are going into error state when nova host aggregation is used to boot VMs
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Expired | Undecided | Unassigned |
Bug Description
VMs go into an error state when a nova host aggregate is used to boot them. I have created a host aggregate and added compute hosts to it, and the matching metadata key-value pair is set in the flavor's extra_specs.
Setup used: Devstack-master branch
In nova.conf, I have the below config for the scheduler filters:
scheduler_
scheduler_driver = filter_scheduler
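For reference, the filter line above is truncated in the report. A minimal sketch of what a Mitaka/Newton-era devstack nova.conf scheduler section typically looks like when aggregate matching is intended is shown below; the filter list here is an assumption for illustration, not the reporter's actual configuration. AggregateInstanceExtraSpecsFilter must appear in the enabled filter list for aggregate metadata to be consulted at all.

```ini
[DEFAULT]
# Illustrative sketch only -- the reporter's real filter list is truncated above.
scheduler_driver = filter_scheduler
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter
```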
Steps:
1. Create an aggregate
stack@devstack:
+----+------+-------------------+------------+----------------+
| Id | Name | Availability Zone | Hosts      | Metadata       |
+----+------+-------------------+------------+----------------+
| 4  | test | -                 |            |                |
+----+------+-------------------+------------+----------------+
2. Set metadata “family=intel” on the created aggregate
stack@devstack:
Metadata has been successfully updated for aggregate 4.
+----+------+-------------------+------------+----------------+
| Id | Name | Availability Zone | Hosts      | Metadata       |
+----+------+-------------------+------------+----------------+
| 4  | test | -                 |            | 'family=intel' |
+----+------+-------------------+------------+----------------+
3. Add a host (devstack) to this aggregate
stack@devstack:
Host devstack has been successfully added for aggregate 4
+----+------+-------------------+------------+----------------+
| Id | Name | Availability Zone | Hosts      | Metadata       |
+----+------+-------------------+------------+----------------+
| 4  | test | -                 | 'devstack' | 'family=intel' |
+----+------+-------------------+------------+----------------+
4. Create a new flavor
stack@devstack:
+------
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+------
| 75a142d8-
+------
5. Set a flavor key-value pair to match the aggregate's metadata
stack@devstack:
stack@devstack:
+------
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+------
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 42 | m1.nano | 64 | 0 | 0 | | 1 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
| 75a142d8-
| 84 | m1.micro | 128 | 0 | 0 | | 1 | 1.0 | True |
| c1 | cirros256 | 256 | 0 | 0 | | 1 | 1.0 | True |
| d1 | ds512M | 512 | 5 | 0 | | 1 | 1.0 | True |
| d2 | ds1G | 1024 | 10 | 0 | | 1 | 1.0 | True |
| d3 | ds2G | 2048 | 10 | 0 | | 2 | 1.0 | True |
| d4 | ds4G | 4096 | 20 | 0 | | 4 | 1.0 | True |
+------
6. From nova flavor-show, we can see the key-value pair is already in “extra_specs”
stack@devstack:
+------
| Property | Value |
+------
| OS-FLV-
| OS-FLV-
| disk | 10 |
| extra_specs | {"family": "intel"} |
| id | 75a142d8-
| name | testflavor1 |
| os-flavor-
| ram | 4096 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 2 |
+------
stack@devstack:
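For context on what the scheduler does with these extra_specs, the sketch below models how AggregateInstanceExtraSpecsFilter conceptually matches a flavor's extra_specs against the combined metadata of the aggregates a host belongs to. This is a simplified illustration, not nova's actual code; the function name and data shapes are invented for this example.

```python
# Simplified model of AggregateInstanceExtraSpecsFilter matching.
# aggregate_metadata maps each metadata key to the set of values collected
# from every aggregate the candidate host is a member of.

def host_passes(aggregate_metadata, extra_specs):
    """Return True if every relevant extra_spec is satisfied by the host's
    aggregate metadata."""
    for key, required in extra_specs.items():
        # Keys may be namespaced as "aggregate_instance_extra_specs:<key>".
        # Keys scoped to some other namespace belong to a different filter.
        scope, sep, rest = key.partition(':')
        if sep and scope != 'aggregate_instance_extra_specs':
            continue  # another filter's namespace; ignored here
        lookup = rest if sep else key
        values = aggregate_metadata.get(lookup)
        if values is None or required not in values:
            return False
    return True

# The devstack host from this report: one aggregate "test" with family=intel.
metadata = {'family': {'intel'}}
print(host_passes(metadata, {'family': 'intel'}))  # True: metadata matches
print(host_passes(metadata, {'family': 'amd'}))    # False: no aggregate provides it
```

By this logic alone, the host in the report should pass, since the aggregate metadata and the flavor key agree; the failure must therefore come from a different filter in the chain.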
7. Now boot a VM with the flavor
stack@devstack:
| Property | Value |
+------
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-STS:vm_state | building |
| OS-SRV-
| OS-SRV-
| accessIPv4 | |
| accessIPv6 | |
| adminPass | T8PaX8G9F6rF |
| config_drive | |
| created | 2016-08-
| description | - |
| flavor | testflavor1 (75a142d8-
| hostId | |
| host_status | |
| id | 238b7482-
| image | cirros-
| key_name | - |
| locked | False |
| metadata | {} |
| name | vm-test |
| os-extended-
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tags | [] |
| tenant_id | b26794c2d89748e
| updated | 2016-08-
| user_id | 2398670e99844bd
+------
stack@devstack:
+------
| Property | Value |
+------
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-STS:vm_state | error |
| OS-SRV-
| OS-SRV-
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2016-08-
| description | - |
| fault | {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": " File \"/opt/
| | context, request_spec, filter_properties) |
| | File \"/opt/
| | hosts = self.scheduler_
| | File \"/opt/
| | return func(*args, **kwargs) |
| | File \"/opt/
| | return self.queryclien
| | File \"/opt/
| | return getattr(
| | File \"/opt/
| | return self.scheduler_
| | File \"/opt/
| | return cctxt.call(ctxt, 'select_
| | File \"/usr/
| | retry=self.retry) |
| | File \"/usr/
| | timeout=timeout, retry=retry) |
| | File \"/usr/
| | retry=retry) |
| | File \"/usr/
| | raise result |
| | ", "created": "2016-08-
| flavor | testflavor1 (75a142d8-
| hostId | |
| host_status | |
| id | 238b7482-
| image | cirros-
| key_name | - |
| locked | False |
| metadata | {} |
| name | vm-test |
| os-extended-
| status | ERROR |
| tags | [] |
| tenant_id | b26794c2d89748e
| updated | 2016-08-
| user_id | 2398670e99844bd
+------
stack@devstack:
Devstack logs are attached.
description: updated
There are a number of reasons scheduling could fail to find a valid host and we need to see the n-sch.log and n-cpu.log to see the details of what's going on. If you can attach those to this bug, we can look into it further.
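One frequent cause of this exact symptom, hedged here since only the logs can confirm it, is that the key was set unscoped (`family=intel`) while ComputeCapabilitiesFilter is also enabled: in Mitaka-era nova that filter treats unscoped extra_specs as required host capabilities, finds no `family` capability on the compute node, and eliminates the host before the aggregate filter can match it. The sketch below models that documented behavior (it is an illustration, not nova's code):

```python
# Simplified contrast of how ComputeCapabilitiesFilter treats the key from
# this report. Models documented Mitaka-era behavior, not nova's actual code.

AGG_SCOPE = 'aggregate_instance_extra_specs'

def capabilities_filter_passes(host_capabilities, extra_specs):
    """ComputeCapabilitiesFilter: unscoped keys are treated as required host
    capabilities; keys scoped to another namespace are ignored."""
    for key, required in extra_specs.items():
        scope, sep, rest = key.partition(':')
        if sep and scope != 'capabilities':
            continue  # e.g. aggregate_instance_extra_specs:* is skipped
        name = rest if sep else key
        if host_capabilities.get(name) != required:
            return False
    return True

# The compute node advertises no 'family' capability; only the aggregate
# metadata carries that key.
host_caps = {'hypervisor_type': 'QEMU'}

# Unscoped key, as in the report: the capabilities filter rejects the host,
# producing "No valid host was found".
print(capabilities_filter_passes(host_caps, {'family': 'intel'}))  # False

# Scoped key: the capabilities filter ignores it, so the host survives to be
# matched by AggregateInstanceExtraSpecsFilter.
print(capabilities_filter_passes(host_caps, {AGG_SCOPE + ':family': 'intel'}))  # True
```

If this is the cause, scoping the flavor key (e.g. `nova flavor-key testflavor1 set aggregate_instance_extra_specs:family=intel`, adapted to this report's flavor) avoids the conflict; the n-sch.log would show which filter returned 0 hosts and confirm or rule this out.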