Comment 3 for bug 721483

Facundo Batista (facundo) wrote :

Attached is a zip file with two graphs I made as a result of a memory analysis of the Ubuntu One client, queuing almost a quarter million commands.

The first graph is the client memory shown by top, annotated with what I did in the test:

- throw files into Ubuntu One (the quantities are the numbers ending in 'f' in the annotations; note that two commands are queued for every file).

- disconnect and connect

- u1sdtool --waiting

Now see the graph called sdmem_timed.png

Some conclusions:

- We don't have a measurable memory impact when disconnecting/connecting; I explicitly did that test because we had an issue with this in the past, and I made fixes in AQ with it in mind.

- Memory growth seems to be linear.

- The '--waiting' call takes a lot of memory. Of course, it then releases it, but it leaves holes in memory that may be reused later (note that when I added the last batch of files, memory use didn't start growing immediately), or may not be (memory fragmentation). For an alternative to this, check bug #754050.

The second graph is much simpler: just the quantity of queued operations against the memory the client used (see sdmem_growths.png).

Yes, the memory usage is pretty much linear.

The "memory per operation" (including all the data structures needed to make it happen) varies heavily, I guess because of the --waiting memory usage, but it is around 5 KB.
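In case anyone wants to reproduce a rough per-operation measurement, here is a minimal sketch using the stdlib tracemalloc module; the QueuedOp class is a hypothetical stand-in, not the real AQ command class, so the numbers will differ from the ~5 KB measured above (which includes everything the real client allocates per command):

```python
import tracemalloc


class QueuedOp(object):
    """Hypothetical stand-in for a queued command and its data."""

    def __init__(self, i):
        self.path = '/home/user/file%06d' % i
        self.payload = {'node_id': i, 'state': 'queued'}


N = 10000
tracemalloc.start()
queue = [QueuedOp(i) for i in range(N)]  # simulate queuing N operations
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Average cost of one queued operation, including its strings and dicts.
print('bytes per queued op: %.0f' % (current / float(N)))
```

This only approximates the Python-level allocations; it doesn't capture fragmentation effects like the ones --waiting causes.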

Despite all of the above, I'll still do the minor memory optimization in AQ that I mentioned: ensure that *all* commands have __slots__ declared.
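For reference, this is why __slots__ helps when you have hundreds of thousands of instances: it removes the per-instance __dict__. A minimal sketch (the class names here are made up, not the real AQ command classes):

```python
import sys


class PlainCommand(object):
    """A command without __slots__: every instance carries a __dict__."""

    def __init__(self, share_id, node_id):
        self.share_id = share_id
        self.node_id = node_id


class SlottedCommand(object):
    """The same command with __slots__: fixed attribute slots, no __dict__."""

    __slots__ = ('share_id', 'node_id')

    def __init__(self, share_id, node_id):
        self.share_id = share_id
        self.node_id = node_id


plain = PlainCommand('share', 'node')
slotted = SlottedCommand('share', 'node')

# The savings come from dropping the per-instance dict entirely.
print(hasattr(plain, '__dict__'))    # True
print(hasattr(slotted, '__dict__'))  # False
print(sys.getsizeof(plain) + sys.getsizeof(plain.__dict__))
print(sys.getsizeof(slotted))
```

Multiplied by a quarter million queued commands, even a saving of ~100 bytes per instance adds up to tens of megabytes.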