Nova does not re-raise 401 Unauthorized received from Neutron for admin users

Bug #1657774 reported by Roman Podoliaka
This bug affects 6 people
Affects Status Importance Assigned to Milestone
OpenStack Compute (nova)
Triaged
Low
Unassigned

Bug Description

Description
===========

If a Keystone token issued for an admin user (e.g. ceilometer) expires or is revoked right after it has been validated by the keystonemiddleware auth_token middleware in nova-api, but before it is validated by the very same middleware in neutron-server, nova-api responds with 400 Bad Request instead of the expected 401 Unauthorized, which would allow the client to re-authenticate and retry the original request.

Steps to reproduce
==================

The condition described above is easy to reproduce synthetically by putting breakpoints in the Nova code and revoking the token. The very same problem can be reproduced in real life by running enough ceilometer polling agents.

Make sure you use the credentials of an admin user (e.g. admin or ceilometer in Devstack) and have at least one instance running (so that `nova list` triggers an HTTP request to neutron-server). A Python sketch of the token revocation step follows the list below.

1. Put a breakpoint on entering get_client() in nova/network/neutronv2/api.py
2. Do `nova list`
3. Revoke the issued token with `openstack token revoke $token` (you may also need to restart memcached to make sure the token validation result is not cached)
4. Continue execution of nova-api
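
For step 3, a programmatic alternative to the CLI is sketched below. This is only an illustrative Python sketch assuming a Devstack-like admin account and python-keystoneclient; the endpoint, credentials and domain IDs are placeholders:

    # Hedged sketch: revoke the current token programmatically (step 3 above).
    # Endpoint, credentials and domain IDs are illustrative assumptions.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session as ks_session
    from keystoneclient.v3 import client as ks_client

    auth = v3.Password(auth_url='http://127.0.0.1/identity/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    sess = ks_session.Session(auth=auth)
    keystone = ks_client.Client(session=sess)

    token = sess.get_token()             # the token nova-api is about to reuse
    keystone.tokens.revoke_token(token)  # same effect as `openstack token revoke $token`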

Expected result
===============

As the token is now invalid (expired or revoked), nova-api is expected to respond with 401 Unauthorized, so that a client can handle this, re-authenticate, and retry the original request.
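
For illustration only, the kind of client-side handling that a 401 response enables could look like the sketch below; it talks to the nova API directly with requests, and nova_endpoint, token and reauthenticate() are hypothetical placeholders:

    # Hedged sketch of the retry-on-401 behaviour a 401 response would enable.
    # nova_endpoint, token and reauthenticate() are hypothetical placeholders.
    import requests

    def list_servers(nova_endpoint, token):
        return requests.get(nova_endpoint + '/servers',
                            headers={'X-Auth-Token': token})

    resp = list_servers(nova_endpoint, token)
    if resp.status_code == 401:
        token = reauthenticate()                   # get a fresh token from Keystone
        resp = list_servers(nova_endpoint, token)  # retry the original request
    # With this bug nova-api returns 400 instead, so the retry path above never fires.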

Actual result
=============

nova-api responds with 400 Bad Request and logs the following error:

2017-01-19 15:02:09.952 595 ERROR nova.network.neutronv2.api [req-0c1558f5-9cc8-4411-9fb1-2fe7cb232725 admin admin] Neutron client was not able to generate a valid admin token, please verify Neutron admin credential located in nova.conf
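
For context, this error message comes from nova's wrapper around the neutron client, which catches neutron's 401 and re-raises it as an exception that nova renders as HTTP 400. A simplified, hand-written approximation of that code path (not the actual nova source) looks roughly like this:

    # Simplified approximation (not the actual nova source): a 401 coming back
    # from neutron-server is caught and, when the wrapper believes an admin
    # context was used, re-raised as a "bad nova.conf credentials" error that
    # nova renders as HTTP 400.
    from neutronclient.common import exceptions as neutron_client_exc
    from nova import exception

    class ClientWrapper(object):
        def __init__(self, client, admin):
            self.client = client
            self.admin = admin

        def call(self, method, *args, **kwargs):
            try:
                return getattr(self.client, method)(*args, **kwargs)
            except neutron_client_exc.Unauthorized:
                if not self.admin:
                    # non-admin path: the unauthorized error is propagated (401)
                    raise
                # admin context assumed: blamed on the [neutron] credentials in
                # nova.conf, which is what produces the log line above and the 400
                raise exception.NeutronAdminCredentialConfigurationInvalid()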

Environment
===========

Devstack, master (Ocata), nova HEAD at da54487edad28c87accbf6439471e7341b52ff48

Changed in nova:
assignee: nobody → Roman Podoliaka (rpodolyaka)
tags: added: api neutron
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/422696

Changed in nova:
status: New → In Progress
Revision history for this message
Pavlo Shchelokovskyy (pshchelo) wrote :

An example of a nasty traceback in n-cpu that I suspect is related to this bug:

http://logs.openstack.org/08/440308/1/check/gate-grenade-dsvm-ironic-inspector-ubuntu-xenial/7274f12/logs/old/screen-n-cpu.txt.gz?#_2017-03-02_11_16_32_358

It actually fails the deletion of a VM (in this case an instance on an Ironic node).

Revision history for this message
Matthew Edmonds (edmondsw) wrote :

I hit the same issue on stable/ocata. The "Neutron client was not able to generate a valid admin token, please verify Neutron admin credential located in nova.conf" error is intended to be output, and HTTP 400 raised, when the session auth is using the neutron auth parameters from nova.conf. Unfortunately this code is also being hit in one case where the session auth only has a token rather than the admin auth parameters from conf, making it unable to re-authenticate. That case is when nova.network.neutronv2.api.get_client has admin=False, context.is_admin=True, and context.auth_token is set. In that case, admin auth is not used [1] but the ClientWrapper is told that an admin context was used [2].

[1] https://github.com/openstack/nova/blob/stable/ocata/nova/network/neutronv2/api.py#L136
[2] https://github.com/openstack/nova/blob/stable/ocata/nova/network/neutronv2/api.py#L154
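
To make the mismatch concrete, a rough paraphrase of the Ocata-era get_client() logic (simplified, not the actual source; the _get_*_auth_plugin and _build_neutron_client helpers are hypothetical placeholders) is:

    # Rough paraphrase of the Ocata get_client() logic (simplified, not the
    # actual source); helper names are hypothetical placeholders.
    def get_client(context, admin=False):
        if admin or (context.is_admin and not context.auth_token):
            # admin credentials from the [neutron] section of nova.conf
            auth_plugin = _get_admin_auth_plugin_from_conf()
        else:
            # admin=False, context.is_admin=True, context.auth_token set:
            # only the user's token is available, so re-authentication is impossible
            auth_plugin = _get_token_auth_plugin(context)
        client = _build_neutron_client(auth_plugin)
        # The wrapper, however, is told an admin context was used whenever
        # admin=True OR context.is_admin=True, so a 401 in the token-only case
        # above is misreported as bad nova.conf credentials (HTTP 400).
        return ClientWrapper(client, admin=(admin or context.is_admin))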

Changed in nova:
status: In Progress → Confirmed
Changed in nova:
status: Confirmed → In Progress
Changed in nova:
assignee: Roman Podoliaka (rpodolyaka) → Vladyslav Drok (vdrok)
Revision history for this message
Sean Dague (sdague) wrote :

Automatically discovered version ocata in description. If this is incorrect, please update the description to include 'nova version: ...'

tags: added: openstack-version.ocata
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by Matt Riedemann (<email address hidden>) on branch: master
Review: https://review.opendev.org/422696
Reason: This is in merge conflict and has been sitting with -1s for a long time. I'm not sure if it's still an issue and if it is, if this is still an OK fix, but it looks abandoned so I'm going to drop it.

Matt Riedemann (mriedem)
Changed in nova:
assignee: Vladyslav Drok (vdrok) → nobody
status: In Progress → Triaged
importance: Undecided → Low
Revision history for this message
Olaf Seibert (oseibert-sys11) wrote (last edit):

Has this bug been fixed in some other way? I think I ran into it on Queens when I tried to do a "nova live-migration-force-complete <id>".

The result was a big mess: the VM was left in a half-migrated state because this error stopped the migration processing in its tracks. The VM was running on the target host, but nova thought it was still migrating. The volume attachments were duplicated: once for the target, once for the source. We needed to remove the old attachments and change the nova database directly to fix the state.

2021-09-06 11:20:55.932 48419 ERROR nova.volume.cinder [req-6f8d526d-4516-4162-9111-ea261eee460a 5073df8f6c9147aa8802953e07d8d825 6fa231b814d04f1e87b88548872333c9 - default default] Delete attachment failed for attachment 4b21f601-1bac-408f-8985-eaba051cc81a. Error: The request you have made requires authentication. (HTTP 401) Code: 401: Unauthorized: The request you have made requires authentication. (HTTP 401)
2021-09-06 11:20:55.934 48419 ERROR nova.compute.manager [req-6f8d526d-4516-4162-9111-ea261eee460a 5073df8f6c9147aa8802953e07d8d825 6fa231b814d04f1e87b88548872333c9 - default default] [instance: a2567a27-aa3f-4b16-8fd9-fc96d2427da3] Volume attachment 4b21f601-1bac-408f-8985-eaba051cc81a not deleted on source host dbl031623.dbl.sys11cloud.net during post_live_migration: The request you have made requires authentication. (HTTP 401): Unauthorized: The request you have made requires authentication. (HTTP 401)
2021-09-06 11:21:01.255 48419 ERROR nova.compute.manager [req-6f8d526d-4516-4162-9111-ea261eee460a 5073df8f6c9147aa8802953e07d8d825 6fa231b814d04f1e87b88548872333c9 - default default] [instance: a2567a27-aa3f-4b16-8fd9-fc96d2427da3] Post live migration at destination dbl021501.dbl.sys11cloud.net failed: NeutronAdminCredentialConfigurationInvalid_Remote: Networking client is experiencing an unauthorized exception.
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 166, in _process_incoming
    res = self.dispatcher.dispatch(message)
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
    return self._do_dispatch(endpoint, method, ctxt, args)
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
    result = func(ctxt, **new_args)
  File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 76, in wrapped
    function_name, call_dict, binary)
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 67, in wrapped
    return f(self, context, *args, **kw)
  File "/usr/lib/python2.7/dist-packages/nova/compute/utils.py", line 1000, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 216, in decorated_function
    kwargs['instance'], e, sys.exc_info())
  File "/usr/lib/python...

