Comment 2 for bug 784181

Revision history for this message
xrg (xrg) wrote : Re: [Bug 784181] [NEW] unlimited search - seems to freeze - no possibility to interrupt

On Tuesday 17 May 2011, you wrote:
> Public bug reported:
>
> if unlimited is set and clear is clicked ALL records are returned. If
> the number is big the computer freezes for a while.
>
> it should be possible to interrupt.

I've seen that again.

Obviously, what you are trying to do is to return several thousand records
in one RPC call (the gtk client issues a single read() over all those ids there).

Apart from the SQL load and the pre-processing of that data, one issue is the
packing of the values into XML [1][2]. That is slow and needs a lot of memory.
Then, transferring the data takes some time, too. Then, parsing the XML inside
the client (Gtk, I assume?) takes more time, and putting all those records into
the model objects is memory- and time-hungry as well.
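To give a rough feel for the marshalling overhead (a sketch with plain Python's
xmlrpc.client, not OpenERP's actual transport layer; the record fields are made
up), compare the size of an XML-RPC response against a compact text encoding of
the same data:

```python
import json
import xmlrpc.client

# A fake result set standing in for read([...]) on a thousand records.
records = [{"id": i, "name": "Partner %d" % i, "credit": 0.0}
           for i in range(1000)]

# Marshal as one XML-RPC method response, the way a read() reply goes
# over the wire: every field becomes a <member><name>..</name><value>..
# block inside a <struct>.
xml = xmlrpc.client.dumps((records,), methodresponse=True)

# A compact text encoding of the same payload, for comparison.
raw = json.dumps(records)

print("xml bytes:", len(xml))
print("raw bytes:", len(raw))
print("overhead factor: %.1fx" % (len(xml) / len(raw)))
```

And that whole string has to sit in memory on the server before the first byte
even leaves, then again in the client before parsing starts.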

Koo works around that by fragmenting the dataset, which is good enough.

However, while working with WebDAV, I have seen that such protocols have a
way of _streaming_ big piles of data down the net, rather than sending one
monolithic chunk. That would be the best-practice solution to copy into our
protocols, too.
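The streaming idea can be sketched as a generator that yields the serialised
response piece by piece, so neither side ever holds the full payload in memory
(this is not how our XML-RPC layer works today; `fetch_chunk` and the toy XML
format are assumptions for illustration):

```python
def stream_records(fetch_chunk, ids, chunk_size=200):
    """Yield a serialised response fragment at a time.

    `fetch_chunk` is a hypothetical callable that reads one slice of
    ids from the database and returns a list of record dicts.
    """
    yield b"<records>"
    for start in range(0, len(ids), chunk_size):
        for rec in fetch_chunk(ids[start:start + chunk_size]):
            # Only chunk_size records are materialised at any moment.
            yield ("<record id='%d'/>" % rec["id"]).encode()
    yield b"</records>"
```

A WSGI-style server could hand such a generator straight to the HTTP layer and
let chunked transfer encoding do the rest; the client would parse incrementally
with a SAX-style parser instead of building one huge DOM.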

A temporary workaround for the gtk client, /feasible today/, would be to
detect when a read() carries many ids and break it into several smaller
calls. In between those calls, it could also update a progress bar.

[1] really, are we talking about xml-rpc, or net-rpc?
[2] just saw your second mail, that matches my hypothesis