bash is not freeing memory of backticked output

Bug #82123 reported by Arnold J Noronha
This bug affects 2 people
Affects: bash (Ubuntu)
Status: Invalid
Importance: Low
Assigned to: Unassigned

Bug Description

Binary package hint: bash

In Feisty, when a command is invoked with backticks, there is apparently a memory leak; the memory is freed only once the corresponding bash session is closed.

How to reproduce:
1. Run the following in a bash session (even gnome-terminal would do):
     for i in `seq 1 100000` ; do true ; done
   You can see the memory increasing; repeat this a few times to observe that it is
   actually increasing and not getting freed. (A measurement sketch follows below.)

On the other hand
  for i in $(seq 1 100000) ; do true; done

works fine.
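
A quick way to watch the shell's own footprint, assuming a Linux procps-style ps (RSS is reported in KiB):

  ps -o rss= -p $$                            # this shell's RSS, before the loop
  for i in `seq 1 100000` ; do true ; done
  ps -o rss= -p $$                            # and after; compare the two numbers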

My system is Feisty (upgraded from Edgy, which was upgraded from Dapper, which was upgraded from Breezy).

--Arnold

Revision history for this message
Scott Severance (scott.severance) wrote :

I can confirm this on Edgy. If you increase the number of iterations, the result is more dramatic. In Edgy, it occurs for both syntaxes.

I noticed, however, that memory usage seems to peak. If you repeatedly run the loop, bash's memory footprint will eventually stop growing.

Changed in bash:
status: Unconfirmed → Confirmed
Changed in bash:
importance: Undecided → Low
Mika Fischer (zoop)
Changed in bash:
status: Confirmed → Triaged
Revision history for this message
Rolf Leggewie (r0lf) wrote :

There hasn't been any activity in this ticket for a while. Is this still a problem in Jaunty or Karmic?

Changed in bash (Ubuntu):
status: Triaged → Incomplete
assignee: nobody → Rolf Leggewie (r0lf)
Revision history for this message
Scott Severance (scott.severance) wrote :

Confirmed in Jaunty via the following command:

for j in `seq 1 10`; do echo "Loop $j"; for i in `seq 1 1000000`; do true ; done; done
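
To watch the growth live while that runs, one option from a second terminal, assuming procps watch and ps (<PID> is a placeholder for the looping shell's PID, which echo $$ prints there):

  watch -n1 'ps -o vsz=,rss= -p <PID>'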

Revision history for this message
Arnold J Noronha (arnold) wrote : Re: bash (feisty) is not freeing memory of backticked output

I can't reproduce in Jaunty with the command in #3.

Memory use peaks at 110-120MB and remains constant thereafter.

summary: - bash (feisty) is not freeing memory of backticked output
+ bash is not freeing memory of backticked output
Revision history for this message
Rolf Leggewie (r0lf) wrote :

Arnold, Scott mentioned in comment 1 that memory consumption eventually peaks. Does the memory get freed after the loop ends or only once you close the bash session? What kind of memory consumption do you see for

for j in $(seq 1 10); do echo "Loop $j"; for i in $(seq 1 1000000); do true ; done; done

In Karmic, both invocations give essentially the same result for me. Memory consumption peaks at around 110-120MB. Memory isn't released in either case until the bash session is closed.

Scott, the $() invocation seemed to work fine for you when you originally reported the issue. Can you please take a look at what the current status is in this regard for you in Jaunty?

FWIW, memory consumption in dash peaks at around 15MB (and the loop finishes much faster). Memory is released once the loop ends.
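
For anyone who wants to reproduce that comparison, a rough sketch, assuming GNU time is installed at /usr/bin/time (its "Maximum resident set size" line reports peak RSS in KiB):

  /usr/bin/time -v bash -c 'for i in $(seq 1 1000000); do true; done'
  /usr/bin/time -v dash -c 'for i in $(seq 1 1000000); do true; done'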

Based on this information, I'm resetting back to triaged status, but ask Scott and Arnold to add the requested information.

BTW, has this ever been reported upstream?

Changed in bash (Ubuntu):
assignee: Rolf Leggewie (r0lf) → nobody
tags: added: jaunty karmic memoryleak
Changed in bash (Ubuntu):
status: Incomplete → Triaged
Revision history for this message
Rolf Leggewie (r0lf) wrote :

Scott, I must have mixed things up in my head. Your comment 1 already states that you were essentially seeing the effects that I was seeing. Just ignore my request for more information from you.

Revision history for this message
Scott Severance (scott.severance) wrote :

Rolf, I'm getting essentially the same results in Jaunty as you are in Karmic, with both syntaxes. And memory is only freed when bash exits.

Revision history for this message
Scott Severance (scott.severance) wrote :

Oops. I replied before I saw your last comment, comment 6.

Revision history for this message
Arnold J Noronha (arnold) wrote :

Hmm, I would've thought that "peaking" was okay. I'm not sure, and I'm just asking out of curiosity: if you malloc 100MB and something else has been added to the heap later, it might be possible that the heap gets fragmented, and that might explain why it simply peaks. In my original bug report, I said:

      "You can see the memory increasing, repeat this few times to observe that it is actually
      increasing and not getting freed."

specifically to rule out this possibility.
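
To make that distinction concrete, a minimal sketch, assuming a Linux procps-style ps: a genuine leak should keep the numbers climbing across passes, while fragmentation or a retained-but-reused heap plateaus.

  for pass in 1 2 3 4 5; do
      for i in `seq 1 100000`; do true; done
      echo "pass $pass: $(ps -o rss= -p $$) KiB resident"
  done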

Revision history for this message
Scott Severance (scott.severance) wrote :

I've had very little experience with manual memory management (my programming preferences tend to involve languages with automatic garbage collection), so I might be showing my ignorance here, but it seems to me that if you call malloc(), you should call free() when you're finished. If the memory footprint is growing--even if there's a peak somewhere--then some memory isn't getting freed, right? If that's the case, then it's a bug.

Revision history for this message
Rolf Leggewie (r0lf) wrote :

Even if dash seems to be able to do this more efficiently, and memory does not get returned until bash exits, I will close this now for the following reasons. As always, feel free to reopen if you have something to add or disagree and want some further discussion.

I talked extensively with some of the users and devs in #bash this morning. They were able to confirm the general findings in this ticket. They did some calculations which indicated to them that memory usage wasn't excessive or out of the norm. I then ran valgrind to detect memory leaks (http://www.cprogramming.com/debugging/valgrind.html). The following caveats apply: I reduced the second (inner) loop by a factor of 100, since otherwise the machine would run out of memory. Furthermore, I ran this on a Debian, not an Ubuntu, machine, since that was the only computer I had access to with 4G of physical RAM. The check indicated no memory leak.
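
The script under test is not attached; a plausible reconstruction, assuming it simply wraps the loop from comment #3 with the inner count cut by the factor of 100 described above:

  #!/bin/bash
  # /tmp/82123.sh -- reconstruction, not the original file
  for j in `seq 1 10`; do
      echo "Loop $j"
      for i in `seq 1 10000`; do true; done
  done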

$ valgrind --tool=memcheck --leak-check=yes /tmp/82123.sh | tee /tmp/valgrind.log
==17152== Memcheck, a memory error detector.
==17152== Copyright (C) 2002-2006, and GNU GPL'd, by Julian Seward et al.
==17152== Using LibVEX rev 1658, a library for dynamic binary translation.
==17152== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP.
==17152== Using valgrind-3.2.1-Debian, a dynamic binary instrumentation framework.
==17152== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward et al.
==17152== For more details, rerun with: -v
==17152==
Loop 1
Loop 2
Loop 3
Loop 4
Loop 5
Loop 6
Loop 7
Loop 8
Loop 9
Loop 10
==17152==
==17152== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 8 from 1)
==17152== malloc/free: in use at exit: 0 bytes in 0 blocks.
==17152== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
==17152== For counts of detected errors, rerun with: -v
==17152== All heap blocks were freed -- no leaks are possible.

Thank you for reporting this issue.

Changed in bash (Ubuntu):
status: Triaged → Invalid
Changed in bash:
status: New → Invalid
Revision history for this message
George Pollard (porges) wrote :

Just ran into this. I think it should be considered a bug.

Bash is currently holding onto 4.8 GiB of my memory, and when I try to run less or top it just states "bash: fork: Cannot allocate memory".

Running a large for-loop should not make bash hold onto this much memory for its whole session.

Revision history for this message
George Pollard (porges) wrote :

Sorry, Launchpad said it wasn't able to post the comment...
