memoized will eventually fill up all of memory

Bug #1098047 reported by Joshua Harlow
This bug affects 1 person
Affects: OpenStack Dashboard (Horizon)
Status: Fix Released
Importance: Wishlist
Assigned to: Radomir Dopieralski
Milestone: None

Bug Description

It seems like certain functions have the memoized decorator.

If the web server is long-lived, it would appear that whatever is memoized never gets un-memoized, so you eventually fill up all RAM and crash. That seems bad.

Revision history for this message
Gabriel Hurley (gabriel-hurley) wrote :

It is a possible problem. The memoized bits should eventually be replaced by an optional caching layer (memcached, etc.).
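
For illustration, a minimal sketch of what such an optional caching layer could look like in a Django settings module, assuming the stock memcached backend; the LOCATION value is only a placeholder, not Horizon's actual configuration:

    # Hypothetical Django settings snippet: route caching through memcached
    # instead of keeping results in per-process memory.
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': '127.0.0.1:11211',
        },
    }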

Changed in horizon:
importance: Undecided → Wishlist
milestone: none → grizzly-3
status: New → Confirmed
Changed in horizon:
milestone: grizzly-3 → none
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to horizon (master)

Fix proposed to branch: master
Review: https://review.openstack.org/54117

Changed in horizon:
assignee: nobody → Radomir Dopieralski (thesheep)
status: Confirmed → In Progress
Changed in horizon:
assignee: Radomir Dopieralski (thesheep) → nobody
status: In Progress → Confirmed
Changed in horizon:
assignee: nobody → Radomir Dopieralski (thesheep)
status: Confirmed → In Progress
Revision history for this message
Radomir Dopieralski (deshipu) wrote :

I proposed a change to the @memoized decorator that should reduce and limit the amount of memory used. The problem with the original @memoized (as well as with django.utils.functional.memoize and functools.lru_cache) is that the cache holds a reference to the original parameters, so they effectively stay in memory forever, even after all other references to them are gone. This is especially bad if you use @memoized on a method of an object: one of the parameters is "self", so the object itself will never be released from memory. Another example can be seen in Horizon, which uses @memoized on functions that take "request" as a parameter; as a result, all requests stay in memory until the process is restarted.
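
For illustration, a minimal sketch of a naive memoizer of the kind described above (the class and method names are hypothetical), showing how the cache key keeps an instance alive through "self":

    import functools

    def naive_memoized(func):
        # The cache maps argument tuples to results, holding strong
        # references to every argument it has ever seen.
        cache = {}

        @functools.wraps(func)
        def wrapper(*args):
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]
        return wrapper

    class ServiceCatalog(object):  # hypothetical example class
        @naive_memoized
        def endpoints(self):
            return {"compute": "http://example.com:8774/v2"}

    catalog = ServiceCatalog()
    catalog.endpoints()
    del catalog  # the instance is still referenced by a cache key inside
                 # naive_memoized, so it is never garbage-collected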

What I'm proposing is to use weakref in the cache, which prevents the cache from holding another strong reference to the parameters and removes cache entries as soon as any of the parameters is no longer needed elsewhere. This way, the cache size is bounded by the number of live objects. That can still be large, but at least it doesn't grow without limit.
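
A minimal sketch of that idea (not the exact code under review; keyword arguments are omitted for brevity): the cache is keyed on weak references, and a callback drops an entry as soon as any of its weakly referenced arguments is garbage-collected.

    import functools
    import weakref

    def memoized(func):
        cache = {}

        def _ref(arg, callback):
            # Some objects (ints, strings, ...) cannot be weakly
            # referenced; fall back to storing them directly.
            try:
                return weakref.ref(arg, callback)
            except TypeError:
                return arg

        @functools.wraps(func)
        def wrapper(*args):
            key = None  # assigned below; the cleanup closure refers to it

            def cleanup(_dead_ref):
                # Remove the whole entry once any weakly referenced
                # argument has been garbage-collected.
                cache.pop(key, None)

            key = tuple(_ref(arg, cleanup) for arg in args)
            try:
                return cache[key]
            except KeyError:
                value = cache[key] = func(*args)
                return value
        return wrapper

With this, decorating a method still caches per instance, but once the instance is otherwise unreferenced its cache entries can be collected along with it.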

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to horizon (master)

Reviewed: https://review.openstack.org/54117
Committed: http://github.com/openstack/horizon/commit/755509c169fdba2032c838697a874bd561835780
Submitter: Jenkins
Branch: master

commit 755509c169fdba2032c838697a874bd561835780
Author: Radomir Dopieralski <email address hidden>
Date: Mon Oct 28 13:56:54 2013 +0100

    Better @memoized decorator

    This version of the @memoized decorator only keeps weak references to
    the function arguments in cache, which means that it doesn't keep
    objects alive forever, and that it won't fill up all available memory
    eventually, as the old version would -- it only keeps cache for the
    objects that are actually in use.

    Honestly, I'm not actually sure that the effect is worth it, as the
    level of complexity of this code is quite high. I thought I would
    submit it anyways, so that we can at least have a discussion.

    Change-Id: I3a4fca3aba7c52102673f7fa5d127873c71516f8
    Partial-Bug: #1098047

Masco (masco)
Changed in horizon:
status: In Progress → Fix Released