Unable to delete domains when users are managed by LDAP back-end
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Identity (keystone) | Fix Released | Medium | Colleen Murphy | |
| Pike | Fix Committed | Undecided | Colleen Murphy | |
| Queens | Fix Committed | Undecided | Colleen Murphy | |
| Rocky | Fix Committed | Undecided | Colleen Murphy | |
| Stein | Fix Released | Medium | Colleen Murphy | |
Bug Description
Problem description:
We can create domains, assign users from an LDAP back-end to those domains, create projects inside the domains, and grant the LDAP users access to the projects. However, when we try to delete a domain that used an LDAP domain configuration to manage users, the deletion fails with a 500 server error.
Reproducing the error:
Flow for creating domains:
1.) Create new domain
2.) Add domain configuration for LDAP config to domain
3.) Grant a user from LDAP access to the domain
4.) Create project(s) inside the domain and grant the user access to the relevant project(s)
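A minimal sketch of this creation flow against the Keystone v3 REST API; the endpoint, token, and ids are placeholders, not values from this deployment, and error handling is omitted:

import requests

KEYSTONE = "http://keystone.example.com:5000/v3"  # assumed endpoint
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN"}         # assumed admin token
USER_ID, ROLE_ID = "USER_ID", "ROLE_ID"           # illustrative ids

# 1) Create the new domain
r = requests.post(f"{KEYSTONE}/domains", headers=HEADERS,
                  json={"domain": {"name": "ldap-domain", "enabled": True}})
domain_id = r.json()["domain"]["id"]

# 2) Add the LDAP domain configuration (full body shown further below)
requests.put(f"{KEYSTONE}/domains/{domain_id}/config", headers=HEADERS,
             json={"config": {"identity": {}, "ldap": {"url": "ldaps://..."}}})

# 3) Grant the LDAP user a role on the domain
requests.put(f"{KEYSTONE}/domains/{domain_id}/users/{USER_ID}/roles/{ROLE_ID}",
             headers=HEADERS)

# 4) Create a project inside the domain and grant the user a role on it
r = requests.post(f"{KEYSTONE}/projects", headers=HEADERS,
                  json={"project": {"name": "demo-project", "domain_id": domain_id}})
project_id = r.json()["project"]["id"]
requests.put(f"{KEYSTONE}/projects/{project_id}/users/{USER_ID}/roles/{ROLE_ID}",
             headers=HEADERS)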
Flow for deleting domains:
1.) Revoke user access to the projects
2.) Remove all projects assigned in the domain
3.) Revoke user access at the domain level
4.) Remove the LDAP domain configuration for the domain
5.) Set domain status to disabled
6.) Remove domain (fails with a 500 server error)
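The same deletion flow as a sketch, with the same placeholder endpoint, token, and ids as above; the final DELETE is the call that returns the 500:

import requests

KEYSTONE = "http://keystone.example.com:5000/v3"  # assumed endpoint
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN"}         # assumed admin token
DOMAIN_ID, PROJECT_ID = "DOMAIN_ID", "PROJECT_ID" # illustrative ids
USER_ID, ROLE_ID = "USER_ID", "ROLE_ID"           # illustrative ids

# 1) Revoke the user's role on the project
requests.delete(f"{KEYSTONE}/projects/{PROJECT_ID}/users/{USER_ID}/roles/{ROLE_ID}",
                headers=HEADERS)
# 2) Remove the project
requests.delete(f"{KEYSTONE}/projects/{PROJECT_ID}", headers=HEADERS)
# 3) Revoke the user's role on the domain
requests.delete(f"{KEYSTONE}/domains/{DOMAIN_ID}/users/{USER_ID}/roles/{ROLE_ID}",
                headers=HEADERS)
# 4) Remove the LDAP domain configuration
requests.delete(f"{KEYSTONE}/domains/{DOMAIN_ID}/config", headers=HEADERS)
# 5) Disable the domain (Keystone requires this before deletion)
requests.patch(f"{KEYSTONE}/domains/{DOMAIN_ID}", headers=HEADERS,
               json={"domain": {"enabled": False}})
# 6) Delete the domain -- fails with HTTP 500 on affected releases
r = requests.delete(f"{KEYSTONE}/domains/{DOMAIN_ID}", headers=HEADERS)
print(r.status_code)  # 500 instead of the expected 204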
LDAP domain configuration added via the domain configuration API call:
{
    "config": {
        "identity": {
        },
        "ldap": {
            "url": "ldaps:
        }
    }
}
Keystone config:
[identity]
domain_
domain_
driver = sql
[ldap]
group_allow_create = False
group_allow_delete = False
group_allow_update = False
query_scope = sub
user_allow_create = False
user_allow_delete = False
user_allow_update = False
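To double-check which back-end a domain is actually using, the stored per-domain configuration can be read back through the same API; a minimal sketch with the same placeholder endpoint and token as above:

import requests

KEYSTONE = "http://keystone.example.com:5000/v3"  # assumed endpoint
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN"}         # assumed admin token

# Read back the per-domain configuration stored via the config API
r = requests.get(f"{KEYSTONE}/domains/DOMAIN_ID/config", headers=HEADERS)
print(r.status_code, r.json())  # 200 and the stored {"config": {...}} body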
After some further investigation, it looks like remote LDAP users are also created "locally" in the Keystone database user table, which maps the users to the domain. When we remove the LDAP domain configuration, these "locally mapped" users remain in the Keystone DB. Keystone appears to perform a sanity check for users still assigned to a domain before deleting it, which is why the deletion fails.
Keystone DB user table:
+-----------------+-------+---------+--------------------+------------+----------------+-----------+
| id              | extra | enabled | default_project_id | created_at | last_active_at | domain_id |
+-----------------+-------+---------+--------------------+------------+----------------+-----------+
| 0ed65766eef5331 | ...
| 10adec00e356f1c | ...
| 1b92cfe2d0add50 | ...
| 1d12c86bafc10dd | ...
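One way to confirm the leftover rows is to query the keystone database directly; a sketch using SQLAlchemy, where the connection URL is an assumption to be adjusted for your deployment:

from sqlalchemy import create_engine, text

# Assumed connection URL; substitute your real keystone DB credentials.
engine = create_engine("mysql+pymysql://keystone:secret@db-host/keystone")

with engine.connect() as conn:
    # Shadowed LDAP users still mapped to the domain block its deletion.
    rows = conn.execute(
        text("SELECT id, enabled, domain_id FROM user WHERE domain_id = :d"),
        {"d": "DOMAIN_ID"},
    )
    for row in rows:
        print(row)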
Stack trace:
2018-11-01 20:43:10.758 253 ERROR keystone. ... (traceback truncated)
Similar bugs have been reported, but they do not involve external authentication for users, such as LDAP. For example: https:/
Some more environment details:
OpenStack versions affected: Pike and Queens
Keystone version on Pike:
{
    "version": {
        "status": "stable",
        "updated": "2017-02-
        {
        }
        ],
        "id": "v3.8",
        "links": [
            {
            }
        ]
    }
}
Changed in keystone:
status: New → Triaged
importance: Undecided → Medium
status: Triaged → New
This looks like an issue; I need to stand up a local keystone to confirm. Marking it as Medium. Once I duplicate this, I'll be able to fix/address the issue. Where this is being raised is not straightforward; it is probably related to a foreign key and a lack of cascade deletes.
I expect this is a small error / simple fix, but the traceback is fairly opaque and will need more digging.
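A generic sketch of that suspected failure mode, using plain SQLAlchemy models rather than keystone's actual schema (all names here are illustrative):

from sqlalchemy import Column, ForeignKey, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Domain(Base):
    __tablename__ = "domain"
    id = Column(String(64), primary_key=True)

class User(Base):
    __tablename__ = "user"
    id = Column(String(64), primary_key=True)
    # No ON DELETE CASCADE: a leftover user row blocks the domain delete.
    domain_id = Column(String(64), ForeignKey("domain.id"), nullable=False)

engine = create_engine("sqlite://")

@event.listens_for(engine, "connect")
def _enable_fk(dbapi_conn, _record):
    dbapi_conn.execute("PRAGMA foreign_keys=ON")  # SQLite needs this explicitly

Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Domain(id="d1"))
    session.add(User(id="u1", domain_id="d1"))
    session.commit()
    # Deleting the parent while the child row still references it raises
    # IntegrityError, which an API layer would surface as an HTTP 500.
    session.delete(session.get(Domain, "d1"))
    session.commit()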