Auth is sending user id : tenant id instead of token to cinder API

Bug #1147994 reported by bullardza@gmail.com
This bug affects 4 people
Affects: Cinder
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

We are trying to boot instances from volume. The following is an example:

nova --debug boot --image d5e8df4c-ba83-43de-9322-045dc5ecee79 --flavor 2 --block_device_mapping vda=b3d302ed-e735-41fa-8c97-79fbf2e7d6c3 cirros-bfv-45

which fails and generates the below in /var/log/cinder/cinder-api.log
-----------------------------------------------------------------------------------
2013-03-05 17:04:05 21908 WARNING keystone.middleware.auth_token [-] Authorization failed for token 07aedb39c8e0472aad8f4a2024b87205:ac37039f5fba4874961fdacf672f2fef

2013-03-05 17:04:05 21908 WARNING keystone.middleware.auth_token [-] Authorization failed for token 07aedb39c8e0472aad8f4a2024b87205:ac37039f5fba4874961fdacf672f2fef

2013-03-05 17:04:05 21908 INFO keystone.middleware.auth_token [-] Invalid user token - rejecting request
--------------------------------------------------------------------------------------

Searching the Keystone database shows that neither of these is a valid token (a quick way to double-check this is sketched after the tenant list below). What is interesting is that '07aedb39c8e0472aad8f4a2024b87205' matches our admin user. See below:

root@atmos27:~# keystone user-list
+----------------------------------+-----------+---------+-------------------------------+
| id | name | enabled | email |
+----------------------------------+-----------+---------+-------------------------------+
| 07aedb39c8e0472aad8f4a2024b87205 | admin | True | |
+----------------------------------+-----------+---------+-------------------------------+

and 'ac37039f5fba4874961fdacf672f2fef' matches our demo tenant. See below:

root@atmos27:~# keystone tenant-list
+----------------------------------+---------+---------+
| id | name | enabled |
+----------------------------------+---------+---------+
| 001c735a4c5640b389a51fc51d1626f9 | service | True |
| ac37039f5fba4874961fdacf672f2fef | demo | True |
+----------------------------------+---------+---------+
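
Rather than reading the token table directly, the two strings can also be checked against the Keystone v2.0 admin validate call (GET /v2.0/tokens/<id>). A rough sketch in Python, with the admin service token left as a placeholder:

import requests

ADMIN_URL = 'http://10.101.54.27:35357/v2.0'
ADMIN_TOKEN = 'ADMIN_SERVICE_TOKEN'  # placeholder: your keystone admin token

def is_valid_token(candidate):
    # Keystone v2.0 admin API: 200 if the token is valid, 404 otherwise
    resp = requests.get('%s/tokens/%s' % (ADMIN_URL, candidate),
                        headers={'X-Auth-Token': ADMIN_TOKEN})
    return resp.status_code == 200

# The two halves of the rejected "token" are a user id and a tenant id,
# so neither should validate:
print(is_valid_token('07aedb39c8e0472aad8f4a2024b87205'))  # admin user id
print(is_valid_token('ac37039f5fba4874961fdacf672f2fef'))  # demo tenant id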

We are using the following environmental variables:

export OS_TENANT_NAME=demo
export OS_USERNAME=admin
export OS_PASSWORD=PASSWORD
export OS_AUTH_URL="http://10.101.54.27:35357/v2.0"
export SERVICE_ENDPOINT=
export SERVICE_TOKEN=
export KEYSTONE_TOKEN_FORMAT=UUID

Ubuntu 12.10, fully updated. OpenStack Folsom from http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main.

Revision history for this message
mgmeskill@gmail.com (mgmeskill) wrote :

Is there any additional info that can be supplied to help with classification or analysis of this issue?

Even some troubleshooting direction / steps would be helpful.

Thanks!
 -Mike

Revision history for this message
bullardza@gmail.com (bullardza) wrote :

Just to add, I have not modified my policy.json files.

Vincent Hou (houshengbo)
Changed in cinder:
status: New → Incomplete
Revision history for this message
Vincent Hou (houshengbo) wrote :

A few things need to be verified.
Is Keystone running successfully? What other data did you enter into the Keystone database tables (e.g. endpoints)?
Have you tried a command like keystone --os_username=... --os_password=... --os_tenant_name=... --os_auth_url=... token-get?
What do the configuration files for Keystone and Cinder look like?

Revision history for this message
bullardza@gmail.com (bullardza) wrote :
Download full text (12.8 KiB)

Keystone is running. Instances launch fine when not doing boot from volume. I've taken instances and attached volumes to them, and the data is persistent. Just can't boot from them.

root@atmos27:/var/log# glance image-list
+--------------------------------------+----------------------------+-------------+------------------+-------------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+----------------------------+-------------+------------------+-------------+--------+
| 0bc94cb9-d334-4bb0-a40d-99cec4d12974 | Windows-2008-x86_64 | raw | ovf | 34359738368 | active |
| 56f7eb02-6a1c-41e6-bdbd-a9467b55a529 | Go-Linux-Snap | qcow2 | bare | 14090240 | active |
| 6abd20f5-c6d5-4d72-a498-6184440e58c5 | CirrOS - 0.3.0 - x86_64 v2 | raw | bare | 41126400 | active |
| 7618783a-1f6b-45f9-88fa-8bbabd40d9f0 | Windows_Snap | raw | ovf | 42949672960 | active |
| a2828ef0-f85f-47e6-98ac-1db77b271c4a | Cirros-mgm-test | raw | bare | 41126400 | active |
| c25b719d-1492-4f96-b71e-08f18d565c1e | cirros-qcow-snap | qcow2 | bare | 14221312 | active |
| d5e8df4c-ba83-43de-9322-045dc5ecee79 | cirros-0.3.0-x86_64 | qcow2 | bare | 9761280 | active |
| f91c7c9e-31b5-4401-b790-8ddf34f2cc2a | CirrOS - 0.3.0 - x86_64 | raw | bare | 41126400 | active |
+--------------------------------------+----------------------------+-------------+------------------+-------------+--------+

root@atmos27:/var/log# cinder list
+--------------------------------------+-----------+---------------------------------+------+-------------+-------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+---------------------------------+------+-------------+-------------+
| 2e6dfb45-18e7-4765-82e9-dc86947a1a27 | available | boot-from-vol-test-1-cirros | 12 | None | |
| 309eaf34-9c43-4195-9438-d15711996882 | available | cirros-qcow-vol-3 | 10 | None | |
| 48ff9347-5017-4175-88c9-65602ed3e582 | available | test | 1 | None | |
| 4b9ae5f4-144f-410b-8d7b-838209aa112b | available | boot-from-vol-test-2-cirros-raw | 12 | None | |
| 8e76244e-7660-41d1-a1d7-704237e5fb74 | creating | cirros-qcow-vol-2 | 10 | None | |
| 947bafad-88db-4f98-bbf1-f939394a02a5 | available | Windows | 32 | None | |
| 9a45d853-7a57-4744-8ced-283c67907bdf | available | boot-from-vol-test-4 | 12 | None | |
| b3d302ed-e735-41fa-8c97-79fbf2e7d6c3 | available | cirros-qcow-vol-1 | 11 | None | |
| c163063a-32b4-4173-ad57-92056eed35c1 | available | Cirros_64-bfv ...

Revision history for this message
Vincent Hou (houshengbo) wrote :

Hi bullardza, I am testing booting from a volume with the latest version of OpenStack, and have filed the bug
https://bugs.launchpad.net/cinder/+bug/1155512

In the Folsom release, copying an image to a volume (and vice versa) has not been added yet, so you need to attach an empty volume to a VM, access the VM, download the image, copy the image into the volume with commands inside the VM, detach the volume, and finally boot from this volume. Is that how you did it? Please refer to
http://docs.openstack.org/folsom/openstack-compute/admin/content/boot-from-volume.html

Revision history for this message
bullardza@gmail.com (bullardza) wrote :

Yes, with the Cirros image and a Windows image we created. We've actually gotten it working with our images before, but ran into some other issues that caused us to rebuild the test environment with the latest release. I will go through the volume setup again to be sure I am following it to a T.

Aside from that, what would explain the token behavior? Regardless of whether the underlying volume is bootable, should it be sending user ID : tenant ID instead of a real token? On other requests, cinder-api.log shows a real token; nova boot and launches from Horizon without volumes have no token problems and deploy just fine.

Revision history for this message
Vincent Hou (houshengbo) wrote :

For your environment, I think it is enough to set the variables:
export OS_TENANT_NAME=demo
export OS_USERNAME=admin
export OS_PASSWORD=PASSWORD
export OS_AUTH_URL="http://10.101.54.27:35357/v2.0"
Make sure they are correct.

An auth token needs to be set in the "X-Auth-Token" field of the header for each HTTP request. However, if you do not have a token, one can be generated for you when you provide your OS_USERNAME and OS_PASSWORD, and it is then set into "X-Auth-Token". OS_TENANT_NAME is optional. The client-side code will do the authentication automatically if you do not provide a token but do provide a username and password.
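
A minimal sketch of that flow using python-keystoneclient and requests (the endpoint URLs and the demo tenant id are taken from this report; this is only an illustration, not the nova or cinder code itself):

import requests
from keystoneclient.v2_0 import client as ksclient

# Authenticate with username/password; the client obtains a token for us.
ks = ksclient.Client(username='admin',
                     password='PASSWORD',
                     tenant_name='demo',
                     auth_url='http://10.101.54.27:35357/v2.0')
token = ks.auth_token  # a real UUID token

# Every request to cinder-api must carry that token in X-Auth-Token;
# keystone.middleware.auth_token validates it before the request is served.
demo_tenant = 'ac37039f5fba4874961fdacf672f2fef'
resp = requests.get('http://10.101.54.27:8776/v1/%s/volumes' % demo_tenant,
                    headers={'X-Auth-Token': token})
print(resp.status_code)  # 200 with a valid token, 401 if it is rejected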

Revision history for this message
bullardza@gmail.com (bullardza) wrote :
Download full text (5.0 KiB)

I commented out everything except what you listed in the rc file that I am sourcing. It didn't seem to make a difference on my Cinder problem.

I tried some token-get commands and am able to get tokens fine.
_____________________________________________________
root@atmos27:~# keystone --debug --os-username=admin --os-password=PASSWORD --os-auth-url=http://10.101.54.27:35357/v2.0 token-get

connect: (10.101.54.27, 35357)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 10.101.54.27:35357\r\nContent-Length: 80\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: python-keystoneclient\r\n\r\n{"auth": {"passwordCredentials": {"username": "admin", "password": "PASSWORD"}}}'
reply: 'HTTP/1.1 200 OK\r\n'
header: Vary: X-Auth-Token
header: Content-Type: application/json
header: Content-Length: 244
header: Date: Tue, 19 Mar 2013 14:17:25 GMT
No handlers could be found for logger "keystoneclient.v2_0.client"
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| expires | 2013-03-20T14:17:25Z |
| id | b98af5515d194708907385a01c4089f3 |
| user_id | 07aedb39c8e0472aad8f4a2024b87205 |
+----------+----------------------------------+

______________________________________________________

So getting a token is working. Now doing the same with the cinder user:
______________________________________________________
root@atmos27:~# keystone --debug --os-username=cinder --os-password=PASSWORD --os-tenant-name=service --os-auth-url=http://10.101.54.27:35357/v2.0 token-get

connect: (10.101.54.27, 35357)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 10.101.54.27:35357\r\nContent-Length: 106\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: python-keystoneclient\r\n\r\n{"auth": {"tenantName": "service", "passwordCredentials": {"username": "cinder", "password": "PASSWORD"}}}'
reply: 'HTTP/1.1 200 OK\r\n'
header: Vary: X-Auth-Token
header: Content-Type: application/json
header: Content-Length: 2706
header: Date: Tue, 19 Mar 2013 14:19:21 GMT
+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2013-03-20T14:19:21Z |
| id | cb607751951943d080a500cb1fef958e |
| tenant_id | 001c735a4c5640b389a51fc51d1626f9 |
| user_id | 98be9a494c1b41118a664b8b58e4af2f |
+-----------+----------------------------------+

__________________________________________________________

Token-get with admin in the demo tenant works, and it works with the cinder user in the service tenant as well, so I am pretty sure token-get is working. I just want to get past this token part of booting from volumes. When I nova boot with boot-from-volume, it fails and cinder-api logs this:
______________________________________________________

2013-03-19 09:09:33 6917 WARNING keystone.middleware.auth_token [-] Authorization failed for token 07aedb39c8e0472aad8f4a2024b87205:ac37039f5fba4874961fdacf672f2fef
2013-03-19 09:09:33 6917 INFO keystone.middleware.auth_token [-] Invalid user token - rejecting request
______________________________________________________

The first part of the faile...


Revision history for this message
bullardza@gmail.com (bullardza) wrote :
Download full text (8.3 KiB)

I found this today when experimenting with boot from volume vs. regular instances. I tailed /var/log/nova/nova-compute.log during a nova boot and saw "Instance failed block device setup", whose timestamp is the same as the cinder-api.log message about token rejection:

2013-03-20 15:06:25 DEBUG nova.volume.cinder [req-a49f2cfc-6e35-498d-9b90-60df72b19902 07aedb39c8e0472aad8f4a2024b87205 ac37039f5fba4874961fdacf672f2fef] Cinderclient connection created using URL: http://10.101.54.27:8776/v1/ac37039f5fba4874961fdacf672f2fef cinderclient /usr/lib/python2.7/dist-packages/nova/volume/cinder.py:68
2013-03-20 15:06:25 ERROR nova.compute.manager [req-a49f2cfc-6e35-498d-9b90-60df72b19902 07aedb39c8e0472aad8f4a2024b87205 ac37039f5fba4874961fdacf672f2fef] [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] Instance failed block device setup
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] Traceback (most recent call last):
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 729, in _prep_block_device
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] return self._setup_block_device_mapping(context, instance)
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 447, in _setup_block_device_mapping
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] volume = self.volume_api.get(context, bdm['volume_id'])
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 144, in get
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] item = cinderclient(context).volumes.get(volume_id)
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] File "/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py", line 147, in get
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] return self._get("/volumes/%s" % volume_id, "volume")
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] File "/usr/lib/python2.7/dist-packages/cinderclient/base.py", line 141, in _get
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] resp, body = self.api.client.get(url)
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 138, in get
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2e0fe3a4ca13] return self._cs_request(url, 'GET', **kwargs)
2013-03-20 15:06:25 14470 TRACE nova.compute.manager [instance: 6d1e115b-6201-4ba4-a088-2...
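
For what it's worth, I suspect the user-id:tenant-id pair in X-Auth-Token comes from a fallback in nova's Cinder client wrapper when the request context carries no real Keystone token. A rough paraphrase of what I think nova/volume/cinder.py does (approximate, not checked against the packaged Folsom source; the URL is the one from the log above):

from cinderclient.v1 import client as cinder_client

def cinderclient(context):
    # Volume endpoint, as seen in the "Cinderclient connection created" line.
    url = 'http://10.101.54.27:8776/v1/%s' % context.project_id
    c = cinder_client.Client(context.user_id,
                             context.auth_token,
                             project_id=context.project_id,
                             auth_url=url)
    # If the context has no real token, a "user_id:project_id" placeholder is
    # used instead. That only works when cinder-api runs without the keystone
    # auth middleware; with auth_token in the pipeline it is rejected, which
    # matches the "Authorization failed for token <user>:<tenant>" lines.
    c.client.auth_token = (context.auth_token or
                           '%s:%s' % (context.user_id, context.project_id))
    c.client.management_url = url
    return c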


Revision history for this message
bullardza@gmail.com (bullardza) wrote :

Why wouldn't I also get the same token rejection when creating a volume?

Revision history for this message
bullardza@gmail.com (bullardza) wrote :

Is anybody else seeing this when doing boot from volume with Ceph on OpenStack Folsom?

Revision history for this message
bullardza@gmail.com (bullardza) wrote :

Vincent, is there any more output I can show you to get rolling on troubleshooting this?

Revision history for this message
Sean McGinnis (sean-mcginnis) wrote : Cleanup

Closing stale bug. If this is still an issue please reopen.

Changed in cinder:
status: Incomplete → Invalid