keystone client is leaving hanging connections to the server

Bug #1282089 reported by mouadino
This bug affects 9 people
Affects                        Status        Importance  Assigned to   Milestone
OpenStack Dashboard (Horizon)  Invalid       High        Unassigned
django-openstack-auth          Invalid       High        Unassigned
python-keystoneclient          Fix Released  High        Jamie Lennox

Bug Description

This is most noticeable from Horizon, which uses keystoneclient to connect to the keystone server: on each request the connection is left hanging, which consumes resources on the keystone server, until at some point the keystone server process exceeds the number of connections it is allowed to handle (the ulimit on open files).

## How to check:

If you have Horizon installed, just keep using it normally (creating instances, ...) while keeping an eye on the server's number of open files with "lsof -p <keystone-pid>"; you can see the number increase pretty quickly.

To reproduce this bug very quickly, try launching 40 instances at the same time,
for example using the "Instance Count" field.

## Why:

This is because keystoneclient doesn't reuse its HTTP connection pool, so in a long-running service (e.g. Horizon) the effect is that a new connection is created for each request, with no connection reuse.
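
A minimal sketch of the failure mode (the v2 client and the call shown are just for illustration):

>>> # what effectively happens on every Horizon request:
>>> from keystoneclient.v2_0 import client
>>> c = client.Client(username=..., password=..., auth_url=...)  # builds its own HTTP session
>>> c.tenants.list()
>>> # 'c' goes out of scope here, but its pooled TCP connection to keystone stays open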

Patch coming soon with more details.

Tags: keystone
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to python-keystoneclient (master)

Fix proposed to branch: master
Review: https://review.openstack.org/74720

Changed in python-keystoneclient:
assignee: nobody → mouadino (mouadino)
status: New → In Progress
Dolph Mathews (dolph)
Changed in python-keystoneclient:
importance: Undecided → High
milestone: none → 0.6.1
mouadino (mouadino)
description: updated
Dolph Mathews (dolph)
Changed in python-keystoneclient:
milestone: 0.6.1 → 0.7.0
Revision history for this message
Florent Flament (florentflament) wrote :

Hi,

I did some investigation, and there is only one session object used by each Keystone HTTPClient instance (the superclass of v2_0.client.Client and v3.client.Client - https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/httpclient.py#L227). From my point of view, keystoneclient's behavior is right. The issue arises when the application using python-keystoneclient instantiates many HTTPClient instances.

I can think of 2 possible fixes:

* The application that uses python-keystoneclient uses as few HTTPClient instances as possible, which seems to be done in Horizon here: https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/keystone.py#L164 . However, there may be a bug, since the number of TCP connections increases with each HTTP request made from the console.

* The HTTPClient instance provides a method that closes every open connection and releases any held resources. This method can be called by the application using python-keystoneclient whenever the client will no longer be used (for instance via a 'with .. as ... :' block; see the sketch below).
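
A sketch of what such a method could look like (the close() shown here is the proposal, not an existing API, and the class is reduced to the one attribute that matters):

import requests

class HTTPClient(object):
    # sketch only: the real keystoneclient class carries much more state
    def __init__(self):
        self.session = requests.Session()

    def close(self):
        # releases every pooled connection held by the underlying requests.Session
        self.session.close()

    # optional context-manager support, so callers can write "with ... as ...:"
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()

# callers could then write:
# with HTTPClient() as c:
#     ...  # connections are released when the block exits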

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/81290

Changed in python-keystoneclient:
assignee: mouadino (mouadino) → Florent Flament (florent-flament-ext)
Revision history for this message
David Stanek (dstanek) wrote :

In looking into this a little deeper I found that keystoneclient was not behaving as I would have expected. In my mind this should release connections:

  >>> c = keystoneclient.client.Client(...)
  >>> del c

It appears that the client implementation uses a significant number of circular references. This means that reference counting is not able to clean up the resources. Once the garbage collector runs, these resources are cleaned up. This can be tested manually by running:

  >>> import gc
  >>> gc.collect()

Circular references: http://git.openstack.org/cgit/openstack/python-keystoneclient/tree/keystoneclient/v3/client.py#n95
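
One way to inspect which keystoneclient objects end up in cycles (a sketch; DEBUG_SAVEALL keeps everything the collector frees in gc.garbage so it can be examined):

  >>> import gc
  >>> gc.set_debug(gc.DEBUG_SAVEALL)
  >>> c = keystoneclient.client.Client(...)
  >>> del c                  # refcounting alone cannot free it
  >>> gc.collect()           # returns the number of unreachable objects found
  >>> [o for o in gc.garbage if 'keystoneclient' in type(o).__module__]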

Revision history for this message
Florent Flament (florentflament) wrote :

Hmm, that's interesting.

If I understand correctly, Python should automatically release connections when there are no more references to them, by calling __del__. But because of circular references, this mechanism doesn't work. So, in addition to having hanging connections, we may be looking at memory leaks.

Ideally, we should try to track down and remove the circular references if possible. In the meantime, we can override the __del__() method of the appropriate classes to help Python free unused connections.

The close() method I proposed isn't that useful if we go with using the __del__ mechanism to have Python close unused connections automatically.

Revision history for this message
mouadino (mouadino) wrote :

@Florent: AFAIK Python knows how to collect circular references unless one of the objects involved defines a __del__ method (http://docs.python.org/2/library/gc.html#gc.garbage). The GC has to use more complex strategies to free circular references (which is of course not ideal) compared to the normal reference counter, but it can do so in the end, so we don't have any memory leaks from this. So I don't think adding a __del__ method will fix anything; on the contrary, it will introduce memory leaks.
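
The Python 2 behaviour referred to here, in miniature (Node is a throwaway class for the demo):

>>> import gc
>>> class Node(object):
...     def __del__(self):          # a __del__ inside a cycle makes it uncollectable
...         pass
...
>>> a = Node(); b = Node()
>>> a.peer = b; b.peer = a          # reference cycle
>>> del a, b
>>> gc.collect()                    # the GC finds the cycle but can't safely destroy it
>>> gc.garbage                      # ...so the objects are parked here: a real leak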

@Florent (for your first comment):

I still think that keystoneclient is misusing requests.Session: this class is not meant to be instantiated more than once for a single endpoint. Besides, I think it would be good to take advantage of connection pooling, which python-requests gives us for free. Now, you may say it's python-requests' fault for not being flexible enough to let us disable connection pooling (which is the main reason connections are kept open) if we don't want to change keystoneclient, but wouldn't it be better to actually use the connection pooling?

As for Horizon's usage of keystoneclient, the caching that you pointed out here (https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/keystone.py#L164) is per request, i.e. when a request is sent from the browser to Horizon, the latter creates a Request object and passes it to the target Django view; if the view must make more than one call to keystone, all of these calls should use the same keystoneclient.client.Client object (i.e. the same token).

Revision history for this message
Florent Flament (florentflament) wrote :

I made some more testing.

@mouadino, Python's GC may be able to free unused resources referenced through circular references. The thing is that we may not want to wait until the GC runs to close open connections.

From what I understand, the interpreter (without involving the GC) frees objects as soon as their number of incoming references falls to 0. Even when the __del__() method is overridden, __del__ is recursively called on objects that no longer have any reference.

Example:
>>> class A():
...     def __del__(self):
...         print "* deleting A"
...
>>> class B():
...     def __init__(self):
...         self.a = A()
...     def __del__(self):
...         print "* deleting B"
...
>>> def f():
...     b = B()
...
>>> f()
* deleting B
* deleting A
>>>

On the other hand, if one creates circular references between objects, the interpreter can't clean up objects that aren't used anymore, since the number of references towards these objects never falls to 0. According to what David Stanek said, when the GC is called, these resources are eventually cleaned up.

Example:
>>> class A():
...     def __init__(self, parent):
...         self.parent = parent
...     def __del__(self):
...         print "* deleting A"
...
>>> class B():
...     def __init__(self):
...         self.a = A(self)
...     def __del__(self):
...         print "* deleting B"
...
>>> def f():
...     b = B()
...
>>> f()
>>>

Therefore, unless we can fix every circular reference, we have to manually release the resources that need it, since the __del__ method isn't even called. For that purpose, the close() method allows us to do so explicitly.

@mouadino, FMPOV, the issue is not entirely about connection pools. I think we would have the same issue with plain open files that are never closed. Besides, if an application only uses one keystoneclient at a time, the Session class should be instantiated only once at a time, unless the client isn't released - which is what happens. I don't think we have to disable connection pools (they may be useful if the Session object is shared by several clients, like keystone, nova, ...), but I do think we have to release these resources properly when they are no longer used.

As for Horizon, I agree with you: the keystoneclient is cached so it can be used several times within a single request. A new keystoneclient (as well as any other client used) is instantiated each time a new request is made to Horizon.

In the end, I think we should explicitly close open sessions after each request made to Horizon. Such an implementation would consist of:
* adding the appropriate close() method to the clients (hence patch https://review.openstack.org/81290 );
* adding a call to this method somewhere in Horizon just before the response is sent back to the browser.

Revision history for this message
mouadino (mouadino) wrote :

> Therefore, unless we can fix every circular reference, we have to manually release the resources that need it, since the __del__
> method isn't even called. For that purpose, the close() method allows us to do so explicitly.

You don't have to run gc.collect() manually; the GC will free them automagically (as long as there is no __del__ method, because that breaks the GC's circular-reference collector) when the object threshold it tracks is reached. But yes, as I said, we should not rely on the GC - that's the reason for this bug report, because in the end stuff fails :)

 > @mouadino, FMPOV, the issue is not entirely about connection pools. I think we would have the same issue with
 > plain open files that are never closed. Besides, if an application only uses one keystoneclient at a time, the Session class should
 > be instantiated only once at a time, unless the client isn't released - which is what happens. I don't think we have to disable
 > connection pools (they may be useful if the Session object is shared by several clients, like keystone, nova, ...), but I do think
 > we have to release these resources properly when they are no longer used.

True, but if there were no connection pooling, requests would be closing the socket for us (because it would not be kept for reuse). Like I said, it is because it does connection pooling that it doesn't close the socket; it does close sockets that are part of the pool overflow (https://github.com/kennethreitz/requests/blob/master/requests/packages/urllib3/connectionpool.py#L245). Or maybe we could just use HTTP 1.0, so the server would close the connection - but that is also a loss for us.
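
For what it's worth, something close to the HTTP 1.0 idea can be approximated with requests by disabling keep-alive on the session (my sketch, not part of any proposed patch):

>>> import requests
>>> s = requests.Session()
>>> s.headers['Connection'] = 'close'   # ask the server to close after each response
>>> r = s.get('http://www.yahoo.fr')
>>> # the socket is torn down after the response instead of returning to the pool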

Revision history for this message
Florent Flament (florentflament) wrote :

I agree, the bug seems to be related to connection pooling... When using raw requests, the connections are released as soon as the response object is deleted. With connection pools, even though the responses are deleted, the connections are kept open (for reuse) until the connection pool itself is deleted.

Example:
>>> import requests
>>> r = requests.get("http://www.yahoo.fr")
>>> # $ lsof -p 5847 | grep TCP | wc -l --> 2
...
>>> del r
>>> # $ lsof -p 5847 | grep TCP | wc -l --> 0
...
>>> s = requests.Session()
>>> r = s.get("http://www.yahoo.fr")
>>> # $ lsof -p 5847 | grep TCP | wc -l --> 2
...
>>> del r
>>> # $ lsof -p 5847 | grep TCP | wc -l --> 2
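
(Not in the transcript above, but worth noting: requests.Session has a close() method that releases the pool explicitly.)

>>> s.close()
>>> # expected: $ lsof -p 5847 | grep TCP | wc -l --> 0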

What I think should be the good fix:

* Ideally, a unique Session object should be instantiated by each OpenStack Dashboard worker (for instance as a class attribute) and shared across all clients (keystoneclient, novaclient, ...). Horizon needs such 'global' attributes to be able to share objects across requests. Moreover, Response objects should be closed (with the close() method) or used inside "with" blocks, to avoid relying on the GC.

* A reasonable fallback in keystoneclient would be to have the requests.Session instance stored as a class attribute of keystoneclient.session.Session when one is not provided to the constructor. Something like this:

class Session(object):
    ...
    session = requests.Session()

    def __init__(self, ..., session=None, ...):
        ...
        if session:
            self.session = session

@mouadino I think this solution is quite close to your proposal, although it doesn't use a global variable (it uses a class attribute instead, which is almost the same). It also allows sharing a unique connection pool across instances of keystoneclient and Horizon's requests. Now I understand the rationale of your proposal.
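
A quick check of what the class attribute buys, assuming the constructor sketched above (and requests imported):

>>> s1 = Session()
>>> s2 = Session()
>>> s1.session is s2.session    # both fall back to the shared class-level pool
True
>>> s3 = Session(session=requests.Session())
>>> s3.session is s1.session    # an explicitly provided session overrides the default
False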

Revision history for this message
Julie Pichon (jpichon) wrote :

Adding the Horizon project to the affected projects to increase the visibility of the bug for Horizon folks, and since there's talk of changing things on the Horizon side too. Thanks!

Revision history for this message
Florent Flament (florentflament) wrote :

I read again Jamie Lennox's article about Sessions:
http://www.jamielennox.net/blog/2014/02/24/client-session-objects/

As well as Python requests module doc:
http://docs.python-requests.org/en/latest/user/advanced/

FWIU, keystoneclient.session.Session and requests.Session objects are
not meant to be shared between different users. Instances of these
classes store information related to a single user (for instance a
user's token). Therefore, we shouldn't have one unique Session
instance shared amongst all users. However, it would make sense to
have a global connection pool inside Horizon. It looks like
requests.adapters.HTTPAdapter would be a good candidate for such an
HTTP connection pool.

Well, I think that's also what Dean Troyer was saying there (Feb 19):
https://review.openstack.org/#/c/74720/

IMHO, keystoneclient.session.Session cannot be mapped on a user
session properly by Horizon, since Horizon doesn't know when a user's
session is terminated (unless he explicitly clicks on the logout
button), and can't "close" the session. Therefore, with Horizon, we
can at best use one keystoneclient.session.Session per request. And
this session should be closed (to release the connections used during
this session) without relying on the GC, which apparently isn't
efficient enough to avoid the current bug.

From the tests I've been doing, I think that most of the connection
leakage comes from the django_openstack_auth module.
python-keystoneclient is used in Horizon too; the api/keystone.py
module will need to be fixed as well.

I've started to implement a connection pool in django_openstack_auth,
but it looks like python-keystoneclient doesn't like it when Clients
are instantiated with an unauthenticated Session object as argument.
Investigating further.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to django_openstack_auth (master)

Fix proposed to branch: master
Review: https://review.openstack.org/81990

Changed in django-openstack-auth:
assignee: nobody → Florent Flament (florent-flament-ext)
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to python-keystoneclient (master)

Fix proposed to branch: master
Review: https://review.openstack.org/82007

Changed in python-keystoneclient:
assignee: Florent Flament (florent-flament-ext) → Jamie Lennox (jamielennox)
Revision history for this message
mouadino (mouadino) wrote :

   > keystoneclient.session.Session and requests.Session objects are not meant to be shared between different users.

AFAIK that's true for keystoneclient.session.Session but not always true for requests.Session, and in our case I think it is OK because we are not using cookies for authentication.

  > However, it would make sense to have a global connection pool inside Horizon

That would break the abstraction layers. Basically, FMPOV, the layers should look like this:

-----------------------------------------------
horizon
-----------------------------------------------
keystone client (token, catalog ...)
-----------------------------------------------
transport (http)
-----------------------------------------------

Now, you don't want Horizon to know whether keystone is over HTTP or plain TCP or Kerberos or whatever; that's not Horizon's problem. So if we are going to manage connection pools, we should do it in keystoneclient, or not do it at all.

The more I think about this problem, the more I believe it is mostly due to a design flaw in keystoneclient (I already mentioned this in my review patch and I will repeat it here): I think keystoneclient.session.Session is doing more than it is supposed to do. Basically, this class should manage a user session and not deal with the HTTP transport too (SRP).

I think a better design (to really fix this problem) would be to add another class that manages endpoint sessions (EndPointSession). This class is the one that should end up dealing with HTTP requests, so keystoneclient.session.Session would become keystoneclient.session.UserSession, and we would move methods like ``request``, ``_send_request`` and so on to EndPointSession. The trick is to have one EndPointSession instance per endpoint; this way we have one requests.Session() per EndPointSession instance, which means one connection pool per endpoint.

In code:

class EndPointSession(object):

    __endpoints = {}

    def __new__(cls, url, ...):
        # one shared instance (and thus one connection pool) per endpoint URL
        if url not in cls.__endpoints:
            cls.__endpoints[url] = super(EndPointSession, cls).__new__(cls)
        return cls.__endpoints[url]

    def __init__(self, url, ...):
        self.session = requests.Session()
        ...

    def request(self, method, ...):
        return self.session.request(method, ...)


class Session(object):

    def __init__(self, url, ...):
        self.session = EndPointSession(url, ...)

Cheers,

Revision history for this message
Jamie Lennox (jamielennox) wrote :

I'm happy to debate the scope of the session object; I know a number of people have raised concerns about intermingling the transport-layer work with the user session management. I'm not averse to splitting them up, but I just haven't heard anything that would make this advantageous.

I'm not sure what your endpoint session would do in this case. When you use requests connection pools (requests.Session), you don't need to allocate one pool per endpoint. If I do:

   s = requests.Session()
   s.get('http://google.com')
   s.get('http://yahoo.com')
   s.get('http://openstack.org')

the connection pool object is smart enough to know that it can't send all of that down the same connection and will open a new connection for each host.

From what I can see, all that the EndPointSession above would be doing is making us manage the host -> connection pool mapping ourselves.

Note that the keystoneclient.session.Session object takes a requests.Session as a parameter, so if you want to use connection pooling in Horizon, all you need to do is create a requests.Session() and pass it into keystoneclient. If the changeover to using the session directly causes too many problems (for now - we'll hopefully tackle this properly in Juno), then I can easily change that to have client.Client accept a requests_session parameter.
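
Concretely, that would look something like this (a sketch of the parameter described above):

>>> import requests
>>> from keystoneclient import session
>>> pooled = requests.Session()              # one shared connection pool
>>> sess = session.Session(session=pooled)   # keystoneclient reuses pooled's connections
>>> # clients constructed on top of sess then share the same pool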

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/82258

Revision history for this message
Jamie Lennox (jamielennox) wrote :

The above review (https://review.openstack.org/82258) would allow Horizon to pass a requests.Session object to keystoneclient if they wished to.

Revision history for this message
mouadino (mouadino) wrote :

   > the connection pool object is smart enough to know that it can't send all of that down the
   > same connection and will open a new connection for each host.

Right, I missed this one, thanks. FWIW, it all happens here: https://github.com/kennethreitz/requests/blob/v2.2.1/requests/packages/urllib3/poolmanager.py#L97 - it doesn't actually just create a new connection but a new pool for each endpoint, so my idea of an EndPointSession managing a connection pool per endpoint is already built into requests, which is pretty cool IMHO. So we can actually share a requests.Session between endpoints and users too (as far as I can tell), which is great - and which begs the question again: why do we need more than one requests.Session in our case!?

  > Note that the keystoneclient.session.Session object takes a requests.Session as a parameter,
  > so if you want to use connection pooling in Horizon, all you need to do is create a
  > requests.Session() and pass it into keystoneclient.

I don't see how it's Horizon's responsibility to tell keystoneclient how to connect to the server. And FWIW, I also don't agree with keystoneclient.session.Session accepting a requests.Session argument, especially if this class is meant to be a high-level class: you can't expect someone to add requests as a dependency to their project just to set that argument, and indirect dependencies are really not a good idea.

So I still think that connection pooling is one of the best features python-requests offers and we should take advantage of it - not indirectly, but as a built-in default in keystoneclient (I don't know whether it should even be optional, i.e. why would anyone not want a connection pool!?).

Cheers,

Revision history for this message
Florent Flament (florentflament) wrote :

@mouadino, I agree that since we don't use cookies, there
shouldn't be any issue with sharing a requests.Session
between users. However, we should consider that the default pool
size is 10 HTTP and 10 HTTPS connections (
https://github.com/kennethreitz/requests/blob/master/requests/adapters.py#L31
) and needs a bit of a hack to be customized, as sketched below.
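
The "bit of a hack" is mounting a transport adapter with a bigger pool; something like this (the sizes are illustrative):

>>> import requests
>>> from requests.adapters import HTTPAdapter
>>> s = requests.Session()
>>> adapter = HTTPAdapter(pool_connections=50, pool_maxsize=50)
>>> s.mount('http://', adapter)    # replaces the default adapter for plain HTTP
>>> s.mount('https://', adapter)   # and for HTTPS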

Then, if I understand, we have two possible options:
* Horizon manages a connection pool and shares it between OpenStack
  clients;
* OpenStack clients are responsible for managing their connection
  pool (or sharing one), which is hidden from Horizon.

From my POV, both options are acceptable. However, unless I find a
good reason to opt for the first option, I think that the second
option makes it easier to write applications that use OpenStack
clients.

As both of you mentioned, the requests.Session object can deal with
several different URLs. So if we go with hiding the connection pool
from Horizon, we can just go with a variant of mouadino's first patch.

As for the option of being able to disable connection pooling, I kind
of agree with mouadino (unless someone provides me with a good
counter-argument).

So it appears that there's an architectural choice to be made
here. In the meantime, Jamie's quick fix
(https://review.openstack.org/#/c/82007/) looks like a good temporary
solution.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to python-keystoneclient (master)

Reviewed: https://review.openstack.org/82007
Committed: https://git.openstack.org/cgit/openstack/python-keystoneclient/commit/?id=6d1f907061c85c6a4550379eab3c11cdcc45def4
Submitter: Jenkins
Branch: master

commit 6d1f907061c85c6a4550379eab3c11cdcc45def4
Author: Jamie Lennox <email address hidden>
Date: Fri Mar 21 16:59:09 2014 +1000

    Don't use a connection pool unless provided

    To prevent left over TCP connections from keystoneclient not correctly
    cleaning up we shouldn't use a connection pool. This is not ideal but it
    was a relatively new addition so shouldn't affect performance.

    When we are able to find a long term solution to keystoneclient's other
    problems we can move back to using a connection pool.

    Change-Id: I45678ef89b88eea90ea04de1e3170f584b51fd8f
    Closes-Bug: #1282089

Changed in python-keystoneclient:
status: In Progress → Fix Committed
Revision history for this message
Jamie Lennox (jamielennox) wrote :

From what I can see, there is no reason that Horizon would need more than one requests.Session(). In fact, I think Horizon would probably be the biggest winner from sharing a session object, as the authentication information is added per request rather than by the requests.Session object.

The main problem is that the keystoneclient.Session object is not necessarily well named. It is analogous to the requests.Session object, and that is where the name came from, but it was designed in such a way that one keystoneclient.Session object is linked to one keystoneclient auth mechanism. Essentially, just as adding a cookie or an auth object to a requests.Session means everything passing through that requests.Session contains that cookie, everything passing through a keystoneclient.Session object contains that keystone token.

We are planning to push this pattern out to other clients, such that one session object is established and used by keystoneclient, novaclient, glanceclient, etc., so you handle auth once. This will be very useful to people who span multiple clients in their work.

This is the reverse of the Horizon case, where you largely have multiple authentications that you want to send down the same connection. This is why the ability to pass in a requests.Session object was created. For Horizon, you will still need one keystoneclient.Session per authentication, but if you create a single requests.Session object and pass it around, you will get connection pooling that does not interfere with the authentication.
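
In miniature, the Horizon pattern described here (the per-user auth wiring is elided; only the shared requests.Session matters for pooling):

>>> import requests
>>> from keystoneclient import session
>>> pool = requests.Session()              # one per worker, shared by all users
>>> alice = session.Session(session=pool)  # one keystoneclient.Session per authentication
>>> bob = session.Session(session=pool)
>>> # each would carry its own token via its auth setup; both reuse the same TCP connections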

As for handling a requests.Session object - I see no problem with people being expected to do that. In the primary use case, a keystoneclient.Session object is created and then shared amongst clients, which means all the clients get connection pooling for free and the requests.Session object is managed for them. When you want to work outside of this pattern, as Horizon does, you do a little extra work to get that benefit. The dependency on requests is already satisfied by all clients, and I fall back on 'make the common case simple and the unusual case achievable'.

Note: I have been experimenting with ways to handle multiple authentication plugins using the one keystoneclient.Session, so that the likes of Horizon/Heat would only ever create one keystoneclient.Session object and many auth plugins. This will depend on the process of getting it into other clients and how much it is requested.

Revision history for this message
Kieran Spear (kspear) wrote :

@Jamie: Thanks for explaining. I've been quietly following this issue and things make a lot more sense to me now. Can we add this to the Keystoneclient docs? There's no mention of a 'session' there yet.

Revision history for this message
Jamie Lennox (jamielennox) wrote :

@Kieran: I do indeed need to get that into the keystoneclient docs. There have been a lot of changes over there recently that will need to be explained, including how the authentication plugins work.

I'm waiting on one more review (https://review.openstack.org/#/c/60752/), which I consider the last of the core session changes; at that point we will be able to start pushing this towards other clients. When it is approved, I will write the whole thing up for the keystoneclient docs.

Dolph Mathews (dolph)
Changed in python-keystoneclient:
status: Fix Committed → Fix Released
Revision history for this message
Akihiro Motoki (amotoki) wrote :

This is worth tracking in the Horizon Juno release. To improve visibility, I have set the priority and the milestone target.

Changed in horizon:
importance: Undecided → High
milestone: none → juno-3
status: New → Confirmed
tags: added: keystone
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to python-keystoneclient (master)

Reviewed: https://review.openstack.org/108868
Committed: https://git.openstack.org/cgit/openstack/python-keystoneclient/commit/?id=8fcacdc7c74f5ac68e8e55ea8c15918c452411fe
Submitter: Jenkins
Branch: master

commit 8fcacdc7c74f5ac68e8e55ea8c15918c452411fe
Author: Jamie Lennox <email address hidden>
Date: Wed Jul 23 09:14:56 2014 +1000

    Move fake session to HTTPClient

    The fake session object is to prevent a cyclical dependency between
    HTTPClient and the session from leaving hanging session objects around.

    This is still necessary if you construct a client the old way however if
    you are using the session properly then there is no cyclical dependency
    and so we shouldn't prevent people using the connection pooling
    advantages of the session.

    Related-Bug: #1282089
    Change-Id: Ifca2c7ddd95a81af01ee43246ecc8e74abf95602

Thierry Carrez (ttx)
Changed in horizon:
milestone: juno-3 → juno-rc1
David Lyle (david-lyle)
Changed in horizon:
milestone: juno-rc1 → kilo-1
Changed in django-openstack-auth:
importance: Undecided → High
David Lyle (david-lyle)
Changed in horizon:
milestone: kilo-1 → kilo-2
Changed in django-openstack-auth:
assignee: Florent Flament (florentflament) → nobody
David Lyle (david-lyle)
Changed in horizon:
milestone: kilo-2 → kilo-3
Changed in django-openstack-auth:
assignee: nobody → Romain Hardouin (romain-hardouin)
Thierry Carrez (ttx)
Changed in horizon:
milestone: kilo-3 → kilo-rc1
David Lyle (david-lyle)
Changed in horizon:
milestone: kilo-rc1 → liberty-1
jun moon (z8715000)
Changed in horizon:
status: Confirmed → In Progress
status: In Progress → Confirmed
Changed in django-openstack-auth:
assignee: Romain Hardouin (romain-hardouin) → nobody
Changed in horizon:
milestone: liberty-1 → liberty-2
David Lyle (david-lyle)
Changed in django-openstack-auth:
status: In Progress → Confirmed
Changed in horizon:
milestone: liberty-2 → liberty-3
Revision history for this message
Neill Cox (neillc) wrote :

I spent some time looking at this bug today. I can't see any problems being caused when launching multiple instances. Is there still work required for Horizon or d-o-a?

Thierry Carrez (ttx)
Changed in horizon:
milestone: liberty-3 → liberty-rc1
Revision history for this message
Matthias Runge (mrunge) wrote :

Neillc, the way we're using the python-*client libraries tends not to reuse existing connections (nor do the python-*client libraries really support that).

This is a difficult issue; at least for novaclient, we tried to reduce the pain in a few places in the code.

David Lyle (david-lyle)
Changed in horizon:
milestone: liberty-rc1 → next
Revision history for this message
Doug Fish (drfish) wrote :

I spent some time trying to reproduce this. I don’t believe the problem is still occurring.

Here’s what I did:

I used devstack to set up an environment on an ubuntu image.
I edited /etc/apache2/sites-available/keystone.conf, changed processes=5 to processes=1 for both virtual hosts (to reduce the number of processes I needed to watch),
and restarted the apache service.

I used ps aux | grep keystone and noted the PIDs of the processes named
(wsgi:keystone-pu -k start
and
(wsgi:keystone-ad -k start
After my keystone.conf edit there is only one of each of these processes.

I opened two windows and monitored each process with a loop like:
while true; do lsof -p <processid> | wc -l; sleep 2; done

Then I opened Horizon. I launched 10 instances, terminated them, and launched 10 more. The output from my loops did not change at all during this time.

Changed in django-openstack-auth:
status: Confirmed → Invalid
Changed in horizon:
status: Confirmed → Invalid
Revision history for this message
Doug Fish (drfish) wrote :

I've marked as "invalid" since I'm unable to reproduce. Please update with new reproduction instructions if this problem remains.

Akihiro Motoki (amotoki)
Changed in horizon:
milestone: next → none