Memory Error when doing Merge

Bug #461992 reported by Drew Hintz
This bug affects 4 people
Affects   Status      Importance   Assigned to   Milestone
Bazaar    Confirmed   High         Unassigned
Breezy    Triaged     Medium       Unassigned

Bug Description

While doing a bzr merge, I get the following output:

lightning-2:EMSFC00.006.000.000.002 andrewhintz$ bzr merge
Merging from remembered parent location bzr+ssh://drew@lipo/home/drew/working/
Python(34196) malloc: *** mmap(size=460984320) failed (error code=12)paring file merge 1354/2293
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Python(34196) malloc: *** mmap(size=1693966336) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
bzr: ERROR: exceptions.MemoryError:

Traceback (most recent call last):
  File "/Library/Python/2.5/site-packages/bzrlib/commands.py", line 727, in exception_to_return_code
    return the_callable(*args, **kwargs)
  File "/Library/Python/2.5/site-packages/bzrlib/commands.py", line 922, in run_bzr
    ret = run(*run_argv)
  File "/Library/Python/2.5/site-packages/bzrlib/commands.py", line 559, in run_argv_aliases
    return self.run(**all_cmd_args)
  File "/Library/Python/2.5/site-packages/bzrlib/plugins/qbzr/lib/commands.py", line 645, in run
    return bzrlib.builtins.cmd_merge.run(self, *args, **kw)
  File "/Library/Python/2.5/site-packages/bzrlib/builtins.py", line 3550, in run
    verified)
  File "/Library/Python/2.5/site-packages/bzrlib/builtins.py", line 3568, in _do_merge
    conflict_count = merger.do_merge()
  File "/Library/Python/2.5/site-packages/bzrlib/merge.py", line 493, in do_merge
    self._do_merge_to(merge)
  File "/Library/Python/2.5/site-packages/bzrlib/merge.py", line 465, in _do_merge_to
    merge.do_merge()
  File "/Library/Python/2.5/site-packages/bzrlib/merge.py", line 604, in do_merge
    self._compute_transform()
  File "/Library/Python/2.5/site-packages/bzrlib/merge.py", line 647, in _compute_transform
    file_status = self.merge_contents(file_id)
  File "/Library/Python/2.5/site-packages/bzrlib/merge.py", line 1165, in merge_contents
    self.text_merge(file_id, trans_id)
  File "/Library/Python/2.5/site-packages/bzrlib/merge.py", line 1195, in text_merge
    other_lines = self.get_lines(self.other_tree, file_id)
  File "/Library/Python/2.5/site-packages/bzrlib/merge.py", line 1182, in get_lines
    return tree.get_file(file_id).readlines()
  File "/Library/Python/2.5/site-packages/bzrlib/revisiontree.py", line 71, in get_file
    return StringIO(self.get_file_text(file_id))
  File "/Library/Python/2.5/site-packages/bzrlib/revisiontree.py", line 67, in get_file_text
    _, content = list(self.iter_files_bytes([(file_id, None)]))[0]
  File "/Library/Python/2.5/site-packages/bzrlib/revisiontree.py", line 80, in iter_files_bytes
    for result in self._repository.iter_files_bytes(repo_desired_files):
  File "/Library/Python/2.5/site-packages/bzrlib/repository.py", line 1939, in iter_files_bytes
    yield text_keys[record.key], record.get_bytes_as('chunked')
  File "/Library/Python/2.5/site-packages/bzrlib/knit.py", line 358, in get_bytes_as
    chunks = self._generator._get_one_work(self.key).text()
  File "/Library/Python/2.5/site-packages/bzrlib/knit.py", line 2031, in _get_one_work
    self._raw_record_map)
  File "/Library/Python/2.5/site-packages/bzrlib/knit.py", line 1207, in _raw_map_to_record_map
    content, digest = self._parse_record(key[-1], data)
  File "/Library/Python/2.5/site-packages/bzrlib/knit.py", line 1800, in _parse_record
    rec, record_contents = self._parse_record_unchecked(data)
  File "/Library/Python/2.5/site-packages/bzrlib/knit.py", line 1830, in _parse_record_unchecked
    (data, e.__class__.__name__, str(e)))
MemoryError

bzr 1.14.1 on python 2.5.1 (darwin)
arguments: ['/usr/local/bin/bzr', 'merge']
encoding: 'UTF-8', fsenc: 'utf-8', lang: 'en_US.UTF-8'
plugins:
  bzrtools /Library/Python/2.5/site-packages/bzrlib/plugins/bzrtools [1.14]
  email /Library/Python/2.5/site-packages/bzrlib/plugins/email [unknown]
  extmerge /Library/Python/2.5/site-packages/bzrlib/plugins/extmerge [unknown]
  launchpad /Library/Python/2.5/site-packages/bzrlib/plugins/launchpad [unknown]
  loom /Library/Python/2.5/site-packages/bzrlib/plugins/loom [1.4dev]
  netrc_credential_store /Library/Python/2.5/site-packages/bzrlib/plugins/netrc_credential_store [unknown]
  qbzr /Library/Python/2.5/site-packages/bzrlib/plugins/qbzr [0.9.9]
  rebase /Library/Python/2.5/site-packages/bzrlib/plugins/rebase [0.4.5dev]
  search /Library/Python/2.5/site-packages/bzrlib/plugins/search [1.7dev]
  svn /Library/Python/2.5/site-packages/bzrlib/plugins/svn [0.5.3]
*** Bazaar has encountered an internal error.
    Please report a bug at https://bugs.launchpad.net/bzr/+filebug
    including this traceback, and a description of what you
    were doing when the error occurred.
lightning-2:EMSFC00.006.000.000.002 andrewhintz$

I am merging with another branch (not the original repository).

Tags: memory
Revision history for this message
Jim Hodapp (jhodapp) wrote :

This bug also affects me on Windows XP SP3. It seems to happen when bzr acts on a very large repository containing large binary files. For me, bzr pull, bzr merge and bzr rebase all fail with this memory error.

Revision history for this message
Jim Hodapp (jhodapp) wrote :

I haven't heard anything else back about this bug. Do the developers need more information from me or Drew Hintz to tackle this? This is a total blocker to using bzr and might cause us to move away from bzr as a result.

Changed in bzr:
importance: Undecided → Critical
status: New → Confirmed
Revision history for this message
Gordon Tyler (doxxx) wrote :

Have you tested using a newer version of bzr? As I understand it, a number of improvements in memory use have been made since bzr 1.14, especially in the 2.x series.

Revision history for this message
Gordon Tyler (doxxx) wrote :

Also, could you give some details on the size of the repository and the large binary files?

Revision history for this message
Jim Hodapp (jhodapp) wrote :

Upgrading from 1.14 to 2.0.2 seems to improve the situation on Mac OS X, but on Windows XP the same problem persists. I'm trying this on an XP installation in VirtualBox with about 700 MB of RAM dedicated to it.

du -hc /bzr/firmware reports a total repository size of 1102961789 bytes (roughly 1.1 GB).

The entire output of du is as follows:

1101401893 /bzr/firmware/.bzr/repository/packs
1376598 /bzr/firmware/.bzr/repository/indices
91528 /bzr/firmware/.bzr/repository/obsolete_packs
4096 /bzr/firmware/.bzr/repository/upload
4096 /bzr/firmware/.bzr/repository/lock
1102882700 /bzr/firmware/.bzr/repository
4096 /bzr/firmware/.bzr/branch-lock
1102891068 /bzr/firmware/.bzr
4096 /bzr/firmware/branches/BattConProtoMaster_demo/.bzr/branch-lock
4096 /bzr/firmware/branches/BattConProtoMaster_demo/.bzr/branch/lock
8400 /bzr/firmware/branches/BattConProtoMaster_demo/.bzr/branch
16768 /bzr/firmware/branches/BattConProtoMaster_demo/.bzr
20864 /bzr/firmware/branches/BattConProtoMaster_demo
4096 /bzr/firmware/branches/BattConProtoMaster/.bzr/branch-lock
4096 /bzr/firmware/branches/BattConProtoMaster/.bzr/branch/lock
8390 /bzr/firmware/branches/BattConProtoMaster/.bzr/branch
16758 /bzr/firmware/branches/BattConProtoMaster/.bzr
20854 /bzr/firmware/branches/BattConProtoMaster
45814 /bzr/firmware/branches
4096 /bzr/firmware/trunk/.bzr/branch-lock
4096 /bzr/firmware/trunk/.bzr/branch/lock
8347 /bzr/firmware/trunk/.bzr/branch
16715 /bzr/firmware/trunk/.bzr
20811 /bzr/firmware/trunk
1102961789 /bzr/firmware
1102961789 total

I'm not exactly sure how big the largest file in there is, but I know it's close to 1 GB. That file should not be in this repository, but it was committed in error and it's too late to remove it at this point.

Thanks for the response too by the way.

Revision history for this message
Jim Hodapp (jhodapp) wrote :

I also noticed, while running a "bzr get" operation on Mac OS X, that the bzr process used up to 2 GB of RAM at its peak. Why would bzr load this much data into memory at once instead of doing more of a streaming operation?

Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 461992] Re: Memory Error when doing Merge

Jim Hodapp wrote:
> I also noticed while running it from within Mac OS X during a "bzr get"
> operation that the bzr binary used up to 2 GB of RAM at its peak. Why
> would bzr load this much data into memory at once and not do more of a
> stream-like operation?
>

Try bzr 2.1.0b3, which does quite a bit better on memory consumption
in general. (In my tests we use about 50% of the memory we used to.)

There are a lot of details I could discuss, but at least it should be
better in the 2.1 branch.

John
=:->

Revision history for this message
Jim Hodapp (jhodapp) wrote :

John, I just tried bzr 2.1.0b3 on Windows and, although I believe it got further into doing a "bzr get" of this repository, it still hit the "out of memory" error and failed to get the branch. It also seemed to use about 100 MB less memory than a prior run with 2.0.2.

Revision history for this message
Jim Hodapp (jhodapp) wrote :

I also just tried a "bzr co --lightweight" to see if that would make a difference. It did not; bzr still ran out of memory.

Revision history for this message
John A Meinel (jameinel) wrote :

Jim Hodapp wrote:
> John, I just tried bzr 2.1.0b3 on Windows and, although I believe it got
> further into doing a "bzr get" of this repository, it still hit the "out
> of memory" error and failed to get the branch. It did seem to use about
> 100 MB of less memory too versus a prior run with 2.0.2.
>

So a hard limit of 700 MB is probably going to be too small for a large
fetch. We do stream a lot of the actual content, but the memory
consumption for the indexes can be non-trivial when you fetch lots of data.

Also, it depends on whether you are running 32-bit or 64-bit Python. Pretty
much everything in Python is either a long or a pointer, which means
most data structures double in size when you switch to 64-bit (and
thus memory consumption almost doubles).
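
A quick, hedged way to check which kind of Python the bzr process is using (this one-liner works on the Python 2.5 shown in the traceback; it prints 32 or 64):

 python -c "import struct; print struct.calcsize('P') * 8"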

Oh, and smart fetch versus dumb fetch will probably also see different
memory profiles. Smart fetch *should* split the memory requirements a
fair bit: the source looks only at its local indices and just sends content
bytes, while the target looks only at its own indices, etc.

I'm sure there is still more that can be done to reduce peak memory, but
it is a bit tricky to mix this with efficient operation. Right now peak
memory generally happens while handling the CHK information, as we have
lots of those objects. It might be possible to split the fetch down a
bit more (say split-by-prefix).

Incremental fetches should generally have significantly lower overhead
than "fetch everything from the start". So you could do:
 for i in `seq 1000 1000 $(bzr revno $OTHER)`; do bzr pull -r $i; done

Especially if the source is a smart server, you should be able to do
whatever you need with that arrangement.
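
As a concrete sketch of that arrangement (reusing the bzr+ssh location from the original report, which already speaks the smart protocol; the 1000-revision step is only an example):

 OTHER=bzr+ssh://drew@lipo/home/drew/working/
 # pull roughly 1000 revisions at a time so peak memory stays bounded
 for i in `seq 1000 1000 $(bzr revno $OTHER)`; do bzr pull -r $i $OTHER; done
 # finish with a plain pull to reach the branch tip
 bzr pull $OTHER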

John
=:->

Andrew Bennetts (spiv)
tags: added: memory
Revision history for this message
Andrew Bennetts (spiv) wrote :

John: how much does the repo format matter to memory consumption? Given that the report started with bzr 1.14 I'm guessing this repo isn't using the latest format. (Although 1.1G is a pretty big repo even in 1.9 or pack-0.92 formats.)

Also, does this bug need to be Critical? I don't think we'd block a release on fixing this (although obviously it would be great to fix or at least improve this).

Vincent Ladeuil (vila)
Changed in bzr:
importance: Critical → High
Revision history for this message
John A Meinel (jameinel) wrote :

Andrew Bennetts wrote:
> John: how much does the repo format matter to memory consumption? Given
> that the report started with bzr 1.14 I'm guessing this repo isn't using
> the latest format. (Although 1.1G is a pretty big repo even in 1.9 or
> pack-0.92 formats.)
>
> Also, does this bug need to be Critical? I don't think we'd block a
> release on fixing this (although obviously it would be great to fix or
> at least improve this).
>

I didn't mark it Critical, Patrick Regan did. I agree with the High
setting. I think Patrick was responding to Jim's comment that it
prevents them from using bzr.

Given the location of the failure, I'm guessing the issue is that they
are extracting a very large file from a knitpack-format repository,
which has not been as memory-tuned as the groupcompress code. Note that
both formats require at least 2x the size of the file, so if you have a
100 MB file, we need at least 200 MB of RAM to extract it.

(A copy of the original, + a copy to update.)

2a-format repos will generally use significantly less memory than knitpack
(pack-0.92, 1.6, 1.9, 1.14) format repositories, because they work
on the raw bytes rather than splitting content into lines (and
incurring a 24-byte PyString overhead per line plus 4 bytes per entry in a PyList).

However, if the problem is during *fetch* that is a different issue.

Oh, and *merge* requires at least 3x the memory, because we have to
unpack BASE + THIS + OTHER. And I'm guessing that means at least 4x in
practice, since when unpacking one of those we need the 2x memory to
apply the deltas.
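
To make those multipliers concrete, a rough back-of-envelope sketch (the ~1 GB figure comes from Jim's earlier comment; the multipliers are the ones described above):

 FILE_MB=1024                                # the roughly 1 GB binary mentioned earlier
 echo "extract:       $((FILE_MB * 2)) MB"   # original copy + updated copy
 echo "merge (texts): $((FILE_MB * 3)) MB"   # BASE + THIS + OTHER fulltexts
 echo "merge (peak):  $((FILE_MB * 4)) MB"   # plus an extra copy while applying deltas to one text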

So... I would expect this to get better with 2a...

Note that in knitpack I think we are using iter_files_bytes, which
potentially unpacks many files at once, so peak memory could be even
higher. It is a bit tricky to give specifics without having the actual
content available.
available.

John
=:->

Revision history for this message
Patrick Regan (patrick-regan) wrote :

On 12/9/09, John A Meinel <email address hidden> wrote:
> Andrew Bennetts wrote:
>> John: how much does the repo format matter to memory consumption? Given
>> that the report started with bzr 1.14 I'm guessing this repo isn't using
>> the latest format. (Although 1.1G is a pretty big repo even in 1.9 or
>> pack-0.92 formats.)
>>
>> Also, does this bug need to be Critical? I don't think we'd block a
>> release on fixing this (although obviously it would be great to fix or
>> at least improve this).
>>
>
> I didn't mark it Critical, Patrick Regan did. I agree with the High
> setting. I think Patrick was responding to Jim's comment that it
> prevents them from using bzr.
>

That was indeed what I was responding to. I should have read it more
carefully and gathered more information before marking it as such.
My apologies.

Pat

Revision history for this message
John A Meinel (jameinel) wrote :

Patrick Regan wrote:
> On 12/9/09, John A Meinel <email address hidden> wrote:
>> Andrew Bennetts wrote:
>>> John: how much does the repo format matter to memory consumption? Given
>>> that the report started with bzr 1.14 I'm guessing this repo isn't using
>>> the latest format. (Although 1.1G is a pretty big repo even in 1.9 or
>>> pack-0.92 formats.)
>>>
>>> Also, does this bug need to be Critical? I don't think we'd block a
>>> release on fixing this (although obviously it would be great to fix or
>>> at least improve this).
>>>
>> I didn't mark it Critical, Patrick Regan did. I agree with the High
>> setting. I think Patrick was responding to Jim's comment that it
>> prevents them from using bzr.
>>
>
> That was indeed what I was responding to. I should have tried to read
> it more carefully and got more information before I marked it as such.
> My apologies.
>
> Pat
>

Just to clarify: Critical means "this must be fixed before we can release
the next bzr". It is reserved mostly for regressions; e.g. if memory
consumption tripled versus a previous bzr, we would consider that a
regression that must be fixed.

This is fairly serious, but it may "already" be solved by upgrading to
the latest bzr with the latest repository format. (Though you probably
won't be able to do the upgrade with a 700MB VM.)

John
=:->

Revision history for this message
Jim Hodapp (jhodapp) wrote :

"This is fairly serious, but it may "already" be solved by upgrading to
the latest bzr with the latest repository format. (Though you probably
won't be able to do the upgrade with a 700MB VM.)"

This is correct. I tried upgrading the repository format to 2a today, and even on a Linux box with 2 GB of memory bzr hit an "out of memory" exception. So it looks like we're stuck with the 1.14-era repository format until there is a way around this. I tried this using bzr 2.0.2 from the official bzr PPA.

Revision history for this message
John A Meinel (jameinel) wrote :

If pure upgrade is OOMing, then one option is to:

bzr init-repo --2a target
cd target
bzr branch ../source/trunk trunk -r 100
cd trunk
bzr pull -r 200
for i in `seq 300 100 $(bzr revno ../../source/trunk)`; do bzr pull -r $i; done

And have it do an incremental conversion. You'll probably want to run:

bzr pack

once everything has finished.
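
Once the loop finishes, a couple of hedged follow-up commands may be useful (assuming you are still inside target/trunk from the recipe above):

 bzr pull       # catch the branch tip if its revno is not a multiple of 100
 bzr info -v    # confirm the new repository reports the 2a format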

Jelmer Vernooij (jelmer)
tags: added: check-for-breezy
Jelmer Vernooij (jelmer)
tags: removed: check-for-breezy
Jelmer Vernooij (jelmer)
Changed in brz:
status: New → Triaged
importance: Undecided → Medium