Playbook Runs Fail in Multi-Domain Environments

Bug #1614211 reported by Sean Carlisle
This bug affects 2 people
Affects: OpenStack-Ansible
Status: Invalid
Importance: Medium
Assigned to: Nolan Brubaker
Milestone: Trunk

Bug Description

Playbook runs for any of the OpenStack services fail in Mitaka environments with multiple domains and the Keystone v3 sample policy in place found here:
https://github.com/openstack/keystone/blob/stable/mitaka/etc/policy.v3cloudsample.json

root@beans-13-1-3:/opt/openstack-ansible/playbooks# openstack-ansible os-keystone-install.yml
...
TASK: [os_keystone | Ensure service tenant] ***********************************
failed: [aio1_keystone_container-70aecfcd] => {"attempts": 5, "failed": true, "parsed": false}
Task failed as maximum retries was encountered

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/root/os-keystone-install.retry

aio1_keystone_container-70aecfcd : ok=83 changed=9 unreachable=0 failed=1

Steps to reproduce:
* Stand up an environment using the current openstack-ansible stable/mitaka branch
* Add policy overrides for Keystone in /etc/openstack_deploy to mimic the v3 policy sample file
* Attempt to run any of the OpenStack service playbooks.

In my output above, this appears to be due to the playbooks authenticating with project scoping instead of domain scoping during the task:

https://github.com/openstack/openstack-ansible-os_keystone/blob/stable/mitaka/tasks/keystone_service_setup.yml#L68:

# Create a service tenant
- name: Ensure service tenant
  keystone:
    command: "ensure_tenant"
    login_user: "{{ keystone_admin_user_name }}"
    login_password: "{{ keystone_auth_admin_password }}"
    login_project_name: "{{ keystone_admin_tenant_name }}"
    endpoint: "{{ keystone_service_adminurl }}"
    tenant_name: "{{ keystone_service_tenant_name }}"
    description: "{{ keystone_service_description }}"
    insecure: "{{ keystone_service_adminuri_insecure }}"
  register: add_service
  until: add_service|success
  retries: 5
  delay: 10
  tags:
    - keystone-api-setup
    - keystone-setup

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

I believe https://review.openstack.org/#/c/309690/ adds the ability to scope the login to the domain, but it hasn't been backported to Mitaka.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-ansible-plugins (stable/mitaka)

Fix proposed to branch: stable/mitaka
Review: https://review.openstack.org/356711

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-plugins (stable/mitaka)

Reviewed: https://review.openstack.org/356711
Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-plugins/commit/?id=620c64e3160edb741bf853def67cf7d034396257
Submitter: Jenkins
Branch: stable/mitaka

commit 620c64e3160edb741bf853def67cf7d034396257
Author: Gabor Lekeny <email address hidden>
Date: Sat Apr 23 15:12:23 2016 +0200

    Add user and project login domains to keystone

    Added login_user_domain_name and login_project_domain_name parameters to
    keystone module.

    Closes-Bug: #1574000
    Partial-Bug: #1614211

    Change-Id: I29524ac9dad063c266122ecee09563531217974c
    Signed-off-by: Gabor Lekeny <email address hidden>
    (cherry picked from commit dce1b35de9076cd1a1a9bcdd812ab876b84a4830)
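
With the backported change, the "Ensure service tenant" task could pass the new parameters to authenticate with domain scoping. A hedged sketch only: login_user_domain_name and login_project_domain_name are the module parameters added by the patch, but the ``keystone_admin_*_domain_name`` variables on the right-hand side are hypothetical and would still need to be defined in OSA:

```yaml
# Sketch, not the merged OSA code: the right-hand-side domain variables
# are assumptions and do not exist in stable/mitaka as-is.
- name: Ensure service tenant
  keystone:
    command: "ensure_tenant"
    login_user: "{{ keystone_admin_user_name }}"
    login_password: "{{ keystone_auth_admin_password }}"
    login_project_name: "{{ keystone_admin_tenant_name }}"
    login_user_domain_name: "{{ keystone_admin_user_domain_name }}"
    login_project_domain_name: "{{ keystone_admin_project_domain_name }}"
    endpoint: "{{ keystone_service_adminurl }}"
    tenant_name: "{{ keystone_service_tenant_name }}"
    description: "{{ keystone_service_description }}"
    insecure: "{{ keystone_service_adminuri_insecure }}"
```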

tags: added: in-stable-mitaka
Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

Sean, were you defining the Keystone domain(s) within an OSA variable to be registered with the 'Ensure service tenant' task? I don't think a variable exists for the domains currently, but wanted to double check with you to be sure I wasn't missing something.

Changed in openstack-ansible:
milestone: none → newton-rc1
Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

I'm able to replicate this behavior on an AIO with the patch above applied. I get the following error:

"You are not authorized to perform the requested action: identity:list_domains"

I'll do some more research to see if we need to modify the keystone library to do something different, or if the policy.json file might need to be adjusted.

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

To further add to this, I cannot view information about users in my keystone installation at this point, either:

root@aio1-utility-container-651bfa53:/# source ~/openrc
root@aio1-utility-container-651bfa53:/# openstack user show admin
Could not find resource admin
root@aio1-utility-container-651bfa53:/# openstack user list
You are not authorized to perform the requested action: identity:list_users (HTTP 403) (Request-ID: req-3ae0f26c-c161-4d55-8617-9eb0ead02e51)

And in my openrc file:

# Ansible managed: /etc/ansible/roles/openstack_openrc/templates/openrc.j2 modified on 2016-07-08 16:10:05 by root on nolan.brubaker-ord-nolan-aio2
export LC_ALL=C

# COMMON CINDER ENVS
export CINDER_ENDPOINT_TYPE=internalURL

# COMMON NOVA ENVS
export NOVA_ENDPOINT_TYPE=internalURL

# COMMON OPENSTACK ENVS
export OS_ENDPOINT_TYPE=internalURL
export OS_USERNAME=admin
export OS_PASSWORD=<removed>
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://172.29.236.100:5000/v3
export OS_NO_CACHE=1
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default

# For openstackclient
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_VERSION=3

Revision history for this message
Sean Carlisle (sean-carlisle) wrote :

Nolan,

I was not defining domains with OSA variables. Also, Keystone v3 scoping works differently here: if you want to view users, projects, and other domains, you have to be scoped at the domain level, like so:

# Ansible managed: /etc/ansible/roles/openstack_openrc/templates/openrc.j2 modified on 2016-07-08 16:10:05 by root on nolan.brubaker-ord-nolan-aio2
export LC_ALL=C

# COMMON CINDER ENVS
export CINDER_ENDPOINT_TYPE=internalURL

# COMMON NOVA ENVS
export NOVA_ENDPOINT_TYPE=internalURL

# COMMON OPENSTACK ENVS
export OS_ENDPOINT_TYPE=internalURL
export OS_USERNAME=admin
export OS_PASSWORD=<removed>
export OS_PROJECT_NAME=
export OS_TENANT_NAME=
export OS_AUTH_URL=http://172.29.236.100:5000/v3
export OS_NO_CACHE=1
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_DOMAIN_NAME=Default

# For openstackclient
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_VERSION=3

Revision history for this message
Corey Wright (coreywright) wrote :

To provide background on what Sean said...

Keystone domain vs project scoped tokens: http://docs.openstack.org/admin-guide/keystone-tokens.html

To be more specific, when migrating to policy.v3cloudsample.json, as an example multi-domain policy, domain-scoped tokens become a requirement to operate at a domain-level (eg list domains).
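
For reference, the Identity v3 API distinguishes the two by the "scope" section of the body POSTed to /v3/auth/tokens. A domain-scoped request looks roughly like this (credentials and domain names are placeholders); a project-scoped request would instead carry "scope": {"project": {...}}:

```json
{
  "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "admin",
          "domain": {"name": "Default"},
          "password": "secret"
        }
      }
    },
    "scope": {
      "domain": {"name": "Default"}
    }
  }
}
```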

The need might be fulfilled as easily as adding a configuration variable/override to allow a user to configure OSA to request domain-scoped tokens or might be as difficult as refactoring the affected OSA role(s) so as to even allow domain-scoped tokens (both of which examples I totally made up, so please forgive me if my work estimates are horribly off).

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

Thanks for that, Corey. It does indeed look like we'll need to modify the services to request a domain-scoped token, and provide further support for it in the keystone library plugin. To request a domain-scoped token, the library must not specify a project name when issuing the HTTP request, which the current code in Mitaka (and I think master, though I've not checked) requires to be present.

My sense right now is that we'll need to do both things you've suggested: refactor the service playbooks to request domain-scoped tokens when a variable says to use domains instead of projects.

Revision history for this message
Jesse Pretorius (jesse-pretorius) wrote :

Removing the Mitaka series as the patch has not yet merged into master. The bug may be targeted to the Mitaka series once it's merged into master.

no longer affects: openstack-ansible/mitaka
Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

Currently testing this against master to see if it behaves the same there. If so, any patches will go to master first then get backported to mitaka.

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

My master builds are blocked by some bind mount bugs. I'll dig around the git logs and see if the code in master looks similar to the mitaka code that's been making this problematic.

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

@Sean (and maybe @Corey) - is the environment you're testing a Liberty upgrade? I ask because I think a workaround for greenfield deployments may be to set up the environment, then apply the policy afterwards. If it's an upgrade, though, that won't work.

Revision history for this message
Corey Wright (coreywright) wrote :

@nolan

yes, a work-around, which is what we are doing while developing and testing the policy, is deploying a mitaka all-in-one and then following up with the policies, using very specific tags and skip-tags to avoid the problem.

for existing deployments, you can see how reverting the policy to make any changes, only to re-apply the policy afterwards (only as a work-around), would be cumbersome if even feasible.

corey

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

Yes, absolutely it's cumbersome and I'm not actually advocating it as a long term solution. I'm admittedly not a keystone expert, just trying to figure out what order to do things with this particular policy applied.

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

I'm digging into this today with someone from the keystone team to see if we can find out what's going on.

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

It appears that with Mitaka and later, the following overrides are necessary to set the default domain for the admin user:

keystone_keystone_conf_overrides:
    resource:
        admin_project_name: admin
        admin_project_domain_name: Default

Using these, I was able to get playbooks to proceed. Could you give that a try and report back?

Revision history for this message
Sean Carlisle (sean-carlisle) wrote :

Hey Nolan,

It's still failing for me. I added the overrides to user_variables.yml and they did make it to keystone.conf:

root@aio1-keystone-container-70aecfcd:/# grep admin_project_ /etc/keystone/keystone.conf
admin_project_name = admin
admin_project_domain_name = Default

However, the os-keystone-install.yml playbook still fails on "Ensure service tenant." Below is the verbose output of the playbook run:

<172.29.237.161> REMOTE_MODULE keystone login_project_name=admin login_password=VALUE_HIDDEN command=ensure_tenant insecure=False tenant_name=service login_user=admin description='Keystone Identity Service' endpoint=http://172.29.236.100:35357/v3

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

Is that just -v? A higher verbosity level should tell you the actual error code received. I was seeing 401 errors.

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

One other thing to try is editing http://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json#n3 and replacing 'domain_admin_id' with the actual value, 'Default'.

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

(Sorry for multiple replies, I keep thinking of things after having posted)

Would you be able to post the section of user_variables.yml you're using to override the policy? I was copying the sample json file directly out of the keystone tree and overwriting the existing template file to test; it's possible that's not a valid test.

Revision history for this message
Dolph Mathews (dolph) wrote :

The "domain_admin_id" in comment #20, which should read "admin_domain_id", is actually a placeholder for a real value. However, the value suggested in comment #20, "Default", is the default domain's *name*, not a domain *ID*. The domain ID of the default domain is "default" (case matters depending on the database's configuration), and is actually configurable here:

  https://github.com/openstack/keystone/blob/e91c6f/etc/keystone.conf.sample#L882

(Sorry for how confusingly this reads, but I believe I got that all correct.)
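
Putting Dolph's correction together with the rule quoted later in this thread, the filled-in cloud_admin rule would use the domain *ID* (assuming the stock default domain ID of "default" has not been changed in keystone.conf):

```json
{
  "cloud_admin": "role:admin and (token.is_admin_project:True or domain_id:default)"
}
```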

Revision history for this message
Nolan Brubaker (nolan-brubaker) wrote :

Thanks for the clarification, Dolph!

Revision history for this message
Corey Wright (coreywright) wrote :

okay, so to add to the further madness/mess... ;)

yes, if you add:

    keystone_keystone_conf_overrides:
        resource:
            admin_project_name: admin
            admin_project_domain_name: Default

*and* you maintained the original keystone v3 sample policy of:

    "cloud_admin": "role:admin and (token.is_admin_project:True or domain_id:<insert domain id here>)",

then it should work, as token.is_admin_project utilizes admin_project_name and admin_project_domain_name and provides backwards compatibility for users/processes that make requests with project-scoped tokens.

but as horizon needs a cached copy of keystone's policy, and keystone + oslo.policy in mitaka can't handle token.is_admin_project [1][2], it had been removed so as to simplify the process of synchronizing horizon with keystone's policy.

so, to reiterate, token.is_admin_project is used here as a work-around to accommodate users/processes that can't request domain-scoped tokens.

[1] https://bugs.launchpad.net/horizon/+bug/1564851
[2] https://bugs.launchpad.net/oslo.policy/+bug/1547684

thanks for working this and tolerating our difficult configuration!

Revision history for this message
Jean-Philippe Evrard (jean-philippe-evrard) wrote :

Is there something to be done in OSA for this bug?
I didn't get the chance to read the full conversation, but I'd like to make sure everything is handled correctly.

Thank you in advance.

Revision history for this message
Sean Carlisle (sean-carlisle) wrote :

I believe we are good to go here. Re-adding token.is_admin_project:True as well as the extra block in user_variables.yml has corrected the problem.

Thanks!

Revision history for this message
Jesse Pretorius (jesse-pretorius) wrote :

Can we consider this bug closed in some way?
