AggregateMultiTenancyIsolation description incorrect

Bug #1328400 reported by Jesse Pretorius
This bug affects 2 people
Affects: openstack-manuals
Status: Fix Released
Importance: Medium
Assigned to: Lana

Bug Description

The AggregateMultiTenancyIsolation filter description in http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html#aggregate-multi-tenancy-isolation gives the impression that it isolates a host aggregate for a specific tenant or list of tenants. This is not exactly true.

What the filter does do is as follows:
 - it ensures that any instances created by the tenant (or list of tenants) are created only on that aggregate
 - it still allows any other tenant to also build instances on the aggregate

In effect, it only isolates the specific tenant/tenant list to the aggregate - but it doesn't isolate the aggregate from other tenants, which is what the current description implies.

Lingxian Kong (kong)
Changed in openstack-manuals:
assignee: nobody → Lingxian Kong (kong)
Revision history for this message
Lingxian Kong (kong) wrote :

Hi Jesse,
After doing a code walkthrough, I'm afraid the original description is right. A tenant can create instances on hosts that don't belong to any aggregate, and also on hosts in an aggregate whose metadata key 'filter_tenant_id' includes that tenant.

Did you do any testing on a real deployment?

Changed in openstack-manuals:
assignee: Lingxian Kong (kong) → nobody
Revision history for this message
Jesse Pretorius (jesse-pretorius) wrote :

Hi Lingxian - yes, I've verified this behaviour in both a testing and production environment.

Please see http://lists.openstack.org/pipermail/openstack-dev/2014-June/036990.html for a discussion on the topic and agreement from others.

Steps to reproduce:

1) In an environment with a single compute node, such as a devstack installation, create a host aggregate and add the compute node to it
2) Add the metadata key 'filter_tenant_id' with an arbitrary value (e.g. 'thisisatest')
3) Using any project, build an instance
4) Observe that the instance is still built on the compute node, even though the aggregate has the metadata key 'filter_tenant_id'

This is the way that the filter was designed - it only isolates the tenant list in the metadata key 'filter_tenant_id' to the host aggregate, but still allows all other tenants to build instances on that host. The documentation implies that it is exclusive.

Revision history for this message
Lingxian Kong (kong) wrote :

Hi Jesse, thanks for the explanation.

I have no environment available at the moment, but I wonder why the following code doesn't take effect:

        if metadata != {}:
            if tenant_id not in metadata["filter_tenant_id"]:
                LOG.debug("%s fails tenant id on aggregate", host_state)
                return False
        return True

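The fragment above can be exercised in isolation. The following is a hypothetical standalone wrapper for illustration only; in nova, `metadata` is built from the host's aggregates by a helper and maps 'filter_tenant_id' to a set of tenant IDs:

```python
# Hypothetical standalone version of the quoted fragment (illustration only).
# `metadata` maps 'filter_tenant_id' to a set of tenant IDs, as aggregated
# across the host aggregates the host belongs to.
def host_passes(metadata, tenant_id):
    if metadata != {}:
        if tenant_id not in metadata["filter_tenant_id"]:
            # The real filter logs "fails tenant id on aggregate" here.
            return False
    return True

# A host whose aggregate lists only the demo tenant rejects admin:
assert not host_passes({"filter_tenant_id": {"demo-tenant-id"}}, "admin-tenant-id")
# ...and accepts the listed tenant, or any tenant on an untagged host:
assert host_passes({"filter_tenant_id": {"demo-tenant-id"}}, "demo-tenant-id")
assert host_passes({}, "admin-tenant-id")
```

So a host carrying 'filter_tenant_id' does reject other tenants, as Lingxian suspected.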
Revision history for this message
Jesse Pretorius (jesse-pretorius) wrote :

Hmm, that does make sense - I may have my wires crossed a little.

The issue may be that if you allocate an aggregate to a tenant, the scheduler may still schedule the instance to build on any other host. I'll have to get a test environment up to confirm.

If that is the case, it may be prudent to make that clear in the documentation.

Revision history for this message
Lingxian Kong (kong) wrote :

Hi Jesse, I ran a test on a fresh devstack, and it proved my concern right.

* I have two tenants, admin and demo.
* I created an aggregate containing a single host, with 'filter_tenant_id' set to the demo tenant's ID.
* I created a VM using admin credentials; the creation failed with 'no valid host'. Below is the log:

2014-06-12 11:48:51.481 DEBUG nova.scheduler.filters.aggregate_multitenancy_isolation [req-ee6a294f-4da7-4456-be75-40b3701566e4 admin admin] Enter AggregateMultiTenancyIsolation metadata {u'filter_tenant_id': set([u'9dac3fc8aaae4f7b80e7c8155a512f0a'])},tenantid cf2d81356beb4c8390aadeb3432f6b3f host_passes /opt/stack/nova/nova/scheduler/filters/aggregate_multitenancy_isolation.py:46
2014-06-12 11:48:51.482 DEBUG nova.scheduler.filters.aggregate_multitenancy_isolation [req-ee6a294f-4da7-4456-be75-40b3701566e4 admin admin] (devstack, devstack) ram:6962 disk:27648 io_ops:0 instances:1 fails tenant id on aggregate host_passes /opt/stack/nova/nova/scheduler/filters/aggregate_multitenancy_isolation.py:49

Revision history for this message
Jesse Pretorius (jesse-pretorius) wrote :

Thanks - that confirms the mixup - sorry about that. You'll see that we're busy with https://review.openstack.org/#/c/99476/ which will add an option to do exclusive scheduling to the aggregate for a list of tenants. If we get that to land in Juno, we'll also submit the documentation patch to clarify what happens in each case.

Revision history for this message
Lingxian Kong (kong) wrote :

Hi Jesse, after going through your spec, I understand your need. A really useful feature!

Changed in openstack-manuals:
status: New → Confirmed
importance: Undecided → Medium
Revision history for this message
Max Lobur (max-lobur) wrote :

I can confirm this, since we hit it in a multi-node environment. We're using Havana, but the description for Havana is the same and still incorrect. The AggregateMultiTenancyIsolation behaviour in Havana is:
1. If a host is in a host aggregate that has filter_tenant_id, the host can only contain VMs from that tenant, and not from any other tenant.
2. If a tenant mentioned in the filter_tenant_id attribute spawns a VM, it chooses from ALL hosts; there is no guarantee that the VM will be spawned on a host from that aggregate.

So currently the filter only isolates the host from any tenant except those mentioned in the filter_tenant_id attribute. It does not restrict the tenant to hosts from that aggregate; the tenant can still spawn anywhere, depending on other filters.
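The two points above can be sketched as a toy scheduling pass over a set of hosts (all names and data hypothetical, for illustration only):

```python
# Toy model of the behaviour described above: the filter removes tagged
# hosts for non-listed tenants, but does not pin the listed tenant to
# the tagged hosts.
def filter_hosts(hosts, tenant_id):
    """hosts: dict of host name -> aggregate metadata dict."""
    passed = []
    for name, metadata in hosts.items():
        allowed = metadata.get("filter_tenant_id")
        if allowed is not None and tenant_id not in allowed:
            continue  # point 1: a tagged host only accepts listed tenants
        passed.append(name)
    return passed

hosts = {
    "host-a": {"filter_tenant_id": {"demo"}},  # in the tagged aggregate
    "host-b": {},                              # not in any tagged aggregate
}

# Other tenants are isolated away from host-a:
assert filter_hosts(hosts, "admin") == ["host-b"]
# ...but "demo" can still land on ANY host (point 2):
assert sorted(filter_hosts(hosts, "demo")) == ["host-a", "host-b"]
```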

Summer Long (slong-g)
Changed in openstack-manuals:
assignee: nobody → Summer Long (slong-g)
Revision history for this message
Tom Fifield (fifieldt) wrote :

Hi Summer,

Are you still intending to work on this one?

Revision history for this message
Summer Long (slong-g) wrote :

Argh, no time (I'll be gone for the next three weeks; I should have bounced this back upstream earlier. Just plain forgot).

Quickly looked up what I did for RH; perhaps you can use it?

For host aggregate metadata (Admin > System > Host Aggregates):
filter_tenant_id: If specified, the aggregate only hosts this tenant (project). Depends on the AggregateMultiTenancyIsolation filter being set for the Compute scheduler.

In nova.conf, as a scheduling filter (as a value in scheduler_default_filters):
AggregateMultiTenancyIsolation: A host with the specified filter_tenant_id can only contain instances from that tenant (project). Note: The tenant can still place instances on other hosts.

Lana (loquacity)
Changed in openstack-manuals:
assignee: Summer Long (slong-g) → nobody
Lana (loquacity)
Changed in openstack-manuals:
assignee: nobody → Lana (loquacity)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-manuals (master)

Fix proposed to branch: master
Review: https://review.openstack.org/281620

Changed in openstack-manuals:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-manuals (master)

Reviewed: https://review.openstack.org/281620
Committed: https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=e3084a90a9e187bf63aecc3950c87702710b8518
Submitter: Jenkins
Branch: master

commit e3084a90a9e187bf63aecc3950c87702710b8518
Author: Lana Brindley <email address hidden>
Date: Thu Feb 18 13:40:07 2016 +1000

    Corrected description of AggregateMultiTenancyIsolation

    As per bug, the description required more specificity.

    Change-Id: Ic8039a71edf50a45e1c8e461780db861add48f2f
    Closes-Bug: #1328400

Changed in openstack-manuals:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-manuals 15.0.0

This issue was fixed in the openstack/openstack-manuals 15.0.0 release.
