Utah uses too much memory

Bug #1301124 reported by Paul Larson
This bug affects 1 person

Affects: UTAH
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

UTAH is being used to run memory-sensitive tests such as smem. We could exclude UTAH from showing up in the smem results, but that might not be enough: the fact that UTAH consumes a large amount of memory could have other undesirable effects during these kinds of tests.

From Colin King:
My concern is that, overall, UTAH is consuming quite a large proportion of total memory, so this adds to the virtual memory "pressure" that can lead to dirty pages being flushed out earlier and unused pages being pushed out of physical memory. This leads to a skewing of the RSS and PSS stats.

Can anything be done to dramatically reduce the memory footprint of utah?

Andy Doan (doanac) wrote:

I think something else may be going on here. As a baseline, I just did some profiling of UTAH locally. It takes about 18M right at startup (i.e. before it's done anything), so I think this may be the basic cost for us to load our code and Python libraries.
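
(For reference, a minimal sketch, not part of UTAH, for checking a Python process's own resident memory right after startup on Linux; ru_maxrss from getrusage() is reported in kilobytes there, and the number it prints will reflect whatever your local interpreter loads, not the 18M measured above.)

    #!/usr/bin/env python
    # Minimal sketch (not UTAH code): print this process's peak resident
    # set size right after startup, before importing anything heavyweight.
    # Uses only the standard library; on Linux ru_maxrss is in kilobytes.
    import resource

    rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print("RSS at startup: %.1f MB" % (rss_kb / 1024.0))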

I looked here:

 http://ci.ubuntu.com/memory/idle/arch/amd64/machine/2/install/results/desktop/pss/

And you can see that our measurements have suddenly dropped from roughly 300M total to about 70M. Even on the 300M runs, I see this one:

 http://ci.ubuntu.com/memory/idle/arch/amd64/machine/2/install/result/19128/details/

where Utah still used about 18M. There's no direct link, but if you click on the utah python links in the URLs I've listed, you'll get a JavaScript pop-up with the UTAH memory history. It looks like it's about 18M, and this 76M was an outlier.

Not to take this off on a tangent, but can anyone explain this massive drop in total memory?

Paul Larson (pwlars) wrote:

One possibility is that it's because we have smem measuring PSS, which gives us not only the memory used by the running process itself but also its relative "proportion" of the shared libraries. On the low end, if there are lots of other processes using the same shared libraries, you could get a low PSS; but if not, then your process suddenly becomes responsible for most or all of it.
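
(For anyone who wants to check a single process directly, here is a minimal sketch, assuming a Linux kernel that exposes per-mapping Pss values in /proc/<pid>/smaps; it just sums those fields for one pid, which is the same proportional figure being discussed here.)

    #!/usr/bin/env python
    # Minimal sketch (not smem or UTAH code): total the Pss fields the
    # kernel exposes per mapping in /proc/<pid>/smaps. Values are in kB.
    import sys

    def total_pss_kb(pid):
        total = 0
        with open('/proc/%s/smaps' % pid) as smaps:
            for line in smaps:
                if line.startswith('Pss:'):
                    total += int(line.split()[1])
        return total

    if __name__ == '__main__':
        pid = sys.argv[1] if len(sys.argv) > 1 else 'self'
        print('PSS for %s: %.1f MB' % (pid, total_pss_kb(pid) / 1024.0))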

Colin Ian King (colin-king) wrote:

Indeed, PSS is both useful and potentially misleading. The issue is that the only process still running python will have a lot of pages accounted to it as the "last man standing" if the other python processes exit.
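
(A back-of-the-envelope sketch of that effect, using made-up figures rather than anything measured on these machines: shared pages are split evenly among the processes mapping them, so when the other sharers exit, the survivor inherits the whole shared cost.)

    # Hypothetical numbers only, to illustrate "last man standing" PSS
    # accounting.
    shared_lib_mb = 30   # assumed size of shared python/library mappings
    private_mb = 18      # assumed private memory of the UTAH process

    def pss_mb(private, shared, sharers):
        # PSS = private pages + (shared pages / number of sharers)
        return private + shared / float(sharers)

    print(pss_mb(private_mb, shared_lib_mb, 3))  # 3 python processes: 28 MB
    print(pss_mb(private_mb, shared_lib_mb, 1))  # last one standing: 48 MB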

Andy Doan (doanac) wrote:

plars - let's just launch a bunch of python scripts that do time.sleep(100000000000). Then maybe we can sneak our memory usage by cking :)

Colin Ian King (colin-king) wrote:

Nice idea!
