Comment 10 for bug 407834

Michael B. Trausch (mtrausch) wrote: Re: [Bug 407834] Re: fetching from pack to 2a format is slow

On Tue, 4 Aug 2009, Martin Pool wrote:

> So is the essence of this bug just that there should be a warning, or
> should/can we do something to make it faster?

Hrm. That I don't know; I'm not intimately familiar with bzr's internals.
I *can* say that I didn't expect it to be unable to finish on my system;
it's not exactly a low-end system. Well, it wasn't when I bought it:

mbt@zest:~/Projects/UNIX/OpenSource/AllTray/alltray$ cat /proc/cpuinfo|grep MH
cpu MHz : 2200.000
cpu MHz : 2200.000
cpu MHz : 2200.000
cpu MHz : 2200.000
mbt@zest:~/Projects/UNIX/OpenSource/AllTray/alltray$ cat /proc/meminfo|grep MemTotal
MemTotal: 5868964 kB

I can note at least the following:

  * Branching it without landing it into a shared repository took only 30
minutes, which was fine given the large size of the project, though others
might complain about that. I believe the Linux kernel is of comparable
size and depth, though I could be wrong; in any event, git seems to clone
it in around 10 minutes (and saturates my connection in doing so, which
branching MySQL never does).

  * When it was doing the conversion locally, it ran for several hours
without completing, using only one core, and seemed to be CPU-bound rather
than I/O-bound. (That it didn't finish isn't bzr's fault, but I don't know
whether I'll ever be able to complete it without replacing hardware or
leaving my system in an unusable-at-the-console state for hours.)

  * The local conversion didn't appear to run (much) faster than the
over-the-network one, though I didn't formally measure that in any way (a
rough way I could measure it next time is sketched after this list). That
said, it wasn't waiting on the network, and my drives were barely active
(this was done on an LVM logical volume striped over two SATA 3.0 Gbps
hard disks).
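
For reference, here is roughly how I'd measure it if I rerun the
conversion (just a sketch; it assumes GNU time and sysstat's iostat are
installed, and <branch-url> stands in for whatever branch is being
fetched):

$ /usr/bin/time -v bzr branch <branch-url> converted-2a
    # GNU time's -v output includes "Percent of CPU this job got",
    # which should show whether a single core stays pegged for the
    # whole run.

$ iostat -x 5
    # Run in a second terminal; the per-device %util column shows
    # whether the disks ever come close to saturation while bzr works.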

So, if the question is, "should it be faster," my (rather uneducated,
end-userish) answer is "yes, of course." But I understand technical
limitations, and if the conversion process is already running as fast as
it can, then I can't really complain that much. See my post to the ML for
some other ideas; I am willing to collect data if it would help give other
users at least a very rough idea of how long operations like this may
take. At the absolute least, I _do_ think that when bzr can see that a
branch is very large and a conversion is going to happen, there should be
_some_ indication that the user may be waiting for a (very) long time,
even on powerhouse systems.

  --- Mike