Dedicated Host

Bug #1771523 reported by Tobias Rydberg
Affects: OpenStack Public Cloud WG
Status: Confirmed
Importance: High
Assigned to: Tobias Rydberg
Milestone: (none)

Bug Description

Provide a dedicated host for the user for better performance, with domain support.

"There were discussions around this feature, and one approved blueprint, but the code wasn't get merged. It is also unclear if using Host Aggregates and AggregateMultitenantIsolation scheduler filter could help solve it

(mriedem): One issue is that the tenant string for that aggregate metadata is restricted to 255 characters (so you can't put many tenants in the aggregate); could we use a parent project ID for nested projects to solve this? Although that likely depends on nova understanding hierarchical quota/domain stuff from Keystone (unified limits)?"
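
For context, the pre-Rocky approach packs a comma-separated list of project IDs into a single aggregate metadata value, which is where the 255-character cap bites (roughly seven 32-character project UUIDs). A hypothetical sketch; the aggregate, host, and project names are made up:

  openstack aggregate create dedicated-agg
  openstack aggregate add host dedicated-agg compute-1
  # One metadata value, capped at 255 characters in total:
  openstack aggregate set --property filter_tenant_id=$PROJECT_A,$PROJECT_B dedicated-agg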

Reference specs
-----------------
"old approved nova spec at https://blueprints.launchpad.net/nova/+spec/whole-host-allocation, several attempts could be found: https://blueprints.launchpad.net/nova?searchtext=dedicated+host

Blazar provides a 'host reservation' feature. It may satisfy this requirement. <https://docs.openstack.org/blazar/latest/>"

Changed in openstack-publiccloud-wg:
importance: Undecided → High
Revision history for this message
Matt Riedemann (mriedem) wrote :

There is a new placement-related scheduler filter in Rocky that CERN is using to do multi-tenant isolation via host aggregates, which resolves the 255-character aggregate metadata limitation:

https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#tenant-isolation-with-placement
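
A minimal sketch of wiring this up, assuming Rocky (aggregate name and project IDs hypothetical); suffixed metadata keys (filter_tenant_id2, ...) are what get around the single-value 255-character cap:

  openstack aggregate set --property filter_tenant_id=$PROJECT_A \
      --property filter_tenant_id2=$PROJECT_B dedicated-agg

  # nova.conf on the scheduler host:
  [scheduler]
  limit_tenants_to_placement_aggregate = True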

Revision history for this message
Matt Riedemann (mriedem) wrote :

Problem: the AggregateMultiTenancyIsolation filter makes it so that project A lands only on the isolated hosts for that project, but it does not prevent the inverse: projects B-Z can also land on that isolated host. See the doc:

https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#aggregatemultitenancyisolation

"This setting does not isolate the aggregate from other tenants. Any other tenant can continue to build instances on the specified aggregate."

Revision history for this message
Matt Riedemann (mriedem) wrote :

What overlap exists with the Blazar project and its concept of dedicated hosts?

https://docs.openstack.org/blazar/latest/cli/host-reservation.html

https://docs.openstack.org/blazar/latest/specs/pike/new-instance-reservation.html

At a high level, my understanding is that blazar has a nova scheduler filter which gets loaded and can link a server create request, via a scheduler hint carrying the host reservation (lease ID in blazar?), to an isolated tenant host aggregate so that the server is created on that dedicated host.

What I'm unsure of with blazar is if there is a concept of infinite leases, i.e. I want a dedicated host on-demand with no expiration on the lease - I will give it up when I'm done using it.
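
For reference, the documented host-reservation flow looks roughly like this (host, lease, image, and flavor names are hypothetical):

  blazar host-create compute-1
  blazar lease-create --physical-reservation min=1,max=1 \
      --start-date "2018-09-01 21:00" --end-date "2018-09-08 21:00" my-lease
  blazar lease-show my-lease    # the reservation ID is listed in the lease's reservations
  openstack server create --image cirros --flavor m1.small \
      --hint reservation=$RESERVATION_ID my-server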

Revision history for this message
Matt Riedemann (mriedem) wrote :

Question about requirements:

* Should a single VM consume the entire dedicated host (like baremetal)?
* Or does the tenant have exclusive access to the dedicated host and can fit as many VMs as possible on it (like the AggregateMultiTenancyIsolation filter)?

The old design spec from HP public cloud sounded like the latter:

https://wiki.openstack.org/wiki/WholeHostAllocation

It is also similar to how the Dedicated Host service works in Huawei's public cloud.

Revision history for this message
Tetsuro Nakamura (tetsuro0907) wrote :

Answers to the questions in comment #3, from the blazar team.
I also left some comments in a related spec https://review.openstack.org/#/c/593475 about how blazar works with aggregates.

> a server create request with a scheduler hint of the host reservation (lease ID in blazar?)

Well, to be accurate, it is not the lease ID but the reservation ID.
For example, one "lease" (start_time to end_time) can include three "reservations", each of which corresponds to one host occupancy.

> What I'm unsure of with blazar is if there is a concept of infinite leases

We don't have such an "infinite" concept explicitly, but you can use a hack and create a lease with something like:

* start_time: "2018-09-01 21:00"
* end_time: "9999-12-31 23:59"

One can delete the lease when it is no longer needed; deleting a lease during its lease time is supported.
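
For illustration, such an effectively unlimited lease and its cleanup (lease name hypothetical):

  blazar lease-create --physical-reservation min=1,max=1 \
      --start-date "2018-09-01 21:00" --end-date "9999-12-31 23:59" forever-lease
  # Later, when the dedicated host is no longer needed:
  blazar lease-delete forever-lease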

Revision history for this message
Tobias Rydberg (tobberydberg) wrote :

Answers to questions in comment #4:

> * Should a single VM consume the entire dedicated host (like baremetal)?

No, that is not the use case from my point of view here.

> * Or does the tenant have exclusive access to the dedicated host and can fit as many VMs as possible on it (like the AggregateMultiTenancyIsolation filter)?

Yes. In "use case words" - the possibility to allow access to a specific compute node or group of compute nodes (host aggregates) to a single domain or project in a domain, or multiple projects (or domains) for that matter.

Changed in openstack-publiccloud-wg:
status: New → Confirmed
Revision history for this message
Matt Riedemann (mriedem) wrote :

We need actual functional testing of the AggregateMultiTenancyIsolation filter, because how it works is confusing. It looks like the problem with that filter is that hosts in the tenant-isolated aggregates are restricted to those tenants, but the tenants themselves are not restricted to hosts in that tenant-isolated aggregate. So if the scheduler picks a few hosts, some inside and some outside the aggregate, and weighs the hosts outside the aggregate higher, the tenant will get those hosts rather than the isolated hosts.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Related nova patch that fixes the docs on the AggregateMultiTenancyIsolation filter and adds a functional test to prove the behavior:

https://review.openstack.org/#/c/601835/

