Redis concurrency problems in API server

Bug #649807 reported by Soren Hansen
This bug affects 1 person
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Low
Assigned to: Unassigned

Bug Description

It seems that both our use of Redis and our use of Carrot (or py-amqplib) suffer from concurrency problems under Eventlet. This bug is about Redis:

Running something like this:
$ euca-describe-instances & euca-describe-instances & euca-describe-instances & euca-describe-instances &

causes at least two (and sometimes all) of the calls to fail with an HTTP 403 error from the API server. My analysis strongly suggests that this is caused by multiple calls to Redis going through the same socket at the same time, making all of them fail.
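
For illustration, here is a rough sketch of the suspected interleaving. It talks to Redis over a raw socket rather than through nova's actual client code, and the host and port are assumptions:

    import eventlet
    eventlet.monkey_patch()
    import socket

    # One socket shared by every greenthread, like the singleton does.
    sock = socket.create_connection(('localhost', 6379))

    def get(key):
        # Send a RESP-encoded GET, then read a reply from the shared socket.
        sock.sendall(b'*2\r\n$3\r\nGET\r\n$' + str(len(key)).encode() +
                     b'\r\n' + key.encode() + b'\r\n')
        # sendall() and recv() yield to other greenthreads under eventlet,
        # so another caller can send its command and then consume *this*
        # caller's reply; replies no longer match requests.
        return sock.recv(4096)

    pool = eventlet.GreenPool()
    for reply in pool.imap(get, ['a', 'b', 'c', 'd']):
        print(reply)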

Replacing the Redis singleton with a class that just returns a fresh Redis connection each time fixes it, but may not be acceptable: when doing 200 concurrent requests, I got a /lot/ of simultaneous Redis connections.
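
A minimal sketch of that workaround, assuming the redis-py client (the class name is hypothetical, not nova's actual code):

    import redis

    class RedisConnectionFactory(object):
        # Stands in for the old singleton: every caller gets its own
        # connection, so greenthreads never share a socket.
        @staticmethod
        def instance():
            return redis.Redis(host='localhost', port=6379)

    # Correct under concurrency, but each request now opens its own
    # connection, so 200 concurrent requests mean 200 open connections.
    r = RedisConnectionFactory.instance()
    r.set('foo', 'bar')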

Soren Hansen (soren)
Changed in nova:
importance: Undecided → High
Revision history for this message
Vish Ishaya (vishvananda) wrote : Re: [Bug 649807] [NEW] Redis concurrency problems in API server

We are running into this occasionally, but I suspect the AuthManager singleton, since we aren't using Redis anymore. I'll use your repro to see if I can stop the LDAP version of the problem. Is someone working on an implementation of an AuthDriver that doesn't use Redis?

On Sep 28, 2010, at 6:19 AM, Soren Hansen wrote:

> Public bug reported:
>
> It seems that both our use of Redis and Carrot (or py-amqplib) are
> suffering somewhat under Eventlet. This bug is about Redis:
>
> Trying something like this:
> $ euca-describe-instances & euca-describe-instances & euca-describe-instances & euca-describe-instances &
>
> Causes at least two (sometimes all) of these calls to fail with an HTTP
> 403 error from the API server. My analysis strongly suggested this to be
> due to multiple calls to Redis going through the same socket at the same
> time, making all of them fail.
>
> Replacing the Redis singleton with a class that just returns a fresh
> Redis connection each time fixes it, but may not be acceptable (When
> doing 200 concurrent requests I got a /lot/ of Redis connections at the
> same time).

Revision history for this message
Eric Day (eday) wrote :

As I mentioned in the other bug, we'll want to pool and reuse Redis connections across requests.
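
One way to sketch that, using eventlet's pool class with redis-py (the pool size, host, and function name here are illustrative assumptions):

    from eventlet import pools
    import redis

    class RedisPool(pools.Pool):
        def create(self):
            # Called by the pool whenever it needs a fresh connection.
            return redis.Redis(host='localhost', port=6379)

    redis_pool = RedisPool(max_size=10)  # bounds concurrent connections

    def lookup_user(access_key):
        # Borrow a connection for one request, then return it to the pool;
        # no two greenthreads ever share a socket, and the connection
        # count stays bounded even at 200 concurrent requests.
        with redis_pool.item() as r:
            return r.hgetall('users:%s' % access_key)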

Revision history for this message
Jay Pipes (jaypipes) wrote :

I'm not sure this should be High priority, as Redis should be entirely removed sometime this week, so I'm setting it back to Low.

Changed in nova:
status: New → Confirmed
importance: High → Low
Revision history for this message
Eric Day (eday) wrote :

Redis is no longer used, so I'm marking this as Invalid.

Changed in nova:
status: Confirmed → Invalid