I have just attempted to upgrade again.
rune@hilbert:~/data$ bzr upgrade
starting upgrade of file:///home/rune/data/
making backup of file:///home/rune/data/.bzr
to file:///home/rune/data/backup.bzr
starting repository conversion
Doing on-the-fly conversion from <RepositoryFormatKnitPack1> to <RepositoryFormat2a>.
This may take some time. Upgrade the repositories to the same format for better performance.
Transferring revisions: repacking texts 63900/88212
Segmentation fault
Just before the segmentation fault I collected some information on memory consumption.
rune@host:~/data$ free -m
             total       used       free     shared    buffers     cached
Mem:          7874       7826         47          0          0         96
-/+ buffers/cache:       7729        144
Swap:         8918       6129       2789
From top:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+   COMMAND
14714 rune      20   0 13.5g 7.1g  504 D    2 92.3  105:00.67 bzr
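The D state in the top output above means uninterruptible sleep, which is consistent with the process being blocked on swap I/O. As a quick, independent way to confirm swap pressure on a Linux box, one can read /proc/meminfo directly (a generic sketch, not anything bzr-specific):

```python
# Read swap usage from /proc/meminfo (Linux only); values are in kB.
def swap_usage(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            key, value = line.split(":", 1)
            # Each line looks like "SwapTotal:  9132028 kB".
            info[key] = int(value.strip().split()[0])
    used = info["SwapTotal"] - info["SwapFree"]
    return info["SwapTotal"], used

total, used = swap_usage()
print(f"swap: {used} / {total} kB used")
```

Running this periodically while the upgrade is in progress shows swap usage climbing as the resident set outgrows physical RAM.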
So it is using swap heavily, which leaves the CPU mostly waiting on I/O.
Earlier in the process, memory usage peaked four separate times at around 10 GB before dropping again.
Few computers have more than 8 GB of RAM.
Of course, few people have repositories of this size to work on either.
Nevertheless, I think you should put some more work into algorithms that only need to operate on fragments of the data at a time (if the problem is that everything is loaded into memory).
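To illustrate the fragment-at-a-time idea, here is a generic sketch, not bzrlib's actual conversion code; convert_text, the input source, and the batch size are all hypothetical stand-ins:

```python
# Illustrative sketch only: process repository texts in fixed-size batches so
# peak memory is proportional to the batch size, not the repository size.

def convert_text(text):
    # Placeholder for the real per-text conversion work.
    return text.upper()

def convert_in_batches(texts, batch_size=1000):
    """Yield converted texts one batch at a time."""
    batch = []
    for text in texts:
        batch.append(text)
        if len(batch) == batch_size:
            yield [convert_text(t) for t in batch]
            batch = []
    if batch:
        yield [convert_text(t) for t in batch]

# Usage: with a generator as input, the full text list never lives in memory.
source = (f"text-{i}" for i in range(5000))
converted = 0
for batch in convert_in_batches(source, batch_size=1000):
    converted += len(batch)
print(converted)  # 5000
```

The same pattern applies whether the batches come from disk, a network stream, or a repository pack: only one batch is materialised at any moment.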