resize2fs: memory allocation failed while trying to resize

Bug #455024 reported by falstaff on 2009-10-18
This bug affects 6 people
Affects Status Importance Assigned to Milestone
e2fsprogs (Ubuntu)
Medium
Unassigned

Bug Description

Binary package hint: e2fsprogs

When I try to resize a big ext3 file system on a RAID5 (/dev/md0) from 5 TB to 7 TB, I get this output:

# resize2fs -p /dev/md0
resize2fs 1.41.9 (22-Aug-2009)
Resizing the filesystem on /dev/md0 to 1709329888 (4k) blocks.
Begin pass 1 (max = 14904)
Extending the inode tableXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
resize2fs: Memory allocation failed while trying to resize /dev/md0
Please run 'e2fsck -fy /dev/md0' to fix the filesystem
after the aborted resize operation.

Note: I'm using Ubuntu Hardy because it is the LTS release, but I installed the e2fsprogs packages from Karmic; since they had no special dependencies, I could install them without any problem. The message above is from this updated e2fsprogs version (note the version number). I get exactly the same message from the original e2fsprogs, except for the last two lines ("Please run...").

I use 4 KB blocks, so ext3 should be able to address up to 8 TB...

falstaff (falstaff) wrote :

I could use ext2resize as a workaround. It is in a package of the same name and is not from the ext2/3 developers. It resizes ext2/3 file systems....

Changed in e2fsprogs (Ubuntu):
status: New → Confirmed
importance: Undecided → Medium
Enrico Zanolin (enrico-uninet) wrote :

I have the same problem resizing my array from 8 to 10 TB and I would not want to use the deprecated utility ext2resize as a workaround.

Here is my log

root@hive:~# resize2fs /dev/md1
resize2fs 1.41.11 (14-Mar-2010)
Resizing the filesystem on /dev/md1 to 2439379760 (4k) blocks.

resize2fs: Memory allocation failed while trying to resize /dev/md1
Please run 'e2fsck -fy /dev/md1' to fix the filesystem
after the aborted resize operation.
root@hive:~#
root@hive:~# e2fsck -f /dev/md1
e2fsck 1.41.11 (14-Mar-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md1: 106832/487882752 files (7.9% non-contiguous), 1161823952/1951503808 blocks
root@hive:~#

I checked top and I have plenty of memory free ( > 1 GB ) on the system before running resize2fs. I am running Ubuntu 10.04 LTS (Linux hive 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 13:27:30 UTC 2010 i686 GNU/Linux).

Any idea what the progress on this bug is? It's six months old already, and I would like to utilise my disks sometime soon.

Theodore Ts'o (tytso) wrote :

Exactly how much free memory do you have, and can you enable swap?

And are you using a 32-bit or 64-bit system?

I would try using a 64-bit system, and adding 8GB of swap space; I think it should work for you then.
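Ted's suggestion can be tried with a temporary swap file; a minimal sketch, where the path, sizes, and device name are examples and the privileged steps are shown as comments (use count=8192 for a real 8 GiB file):

```shell
# Create a swap file to back the resize (8 MiB here for illustration;
# use count=8192 for 8 GiB). Path and device name are examples.
SWAPFILE=./swapfile.tmp
dd if=/dev/zero of="$SWAPFILE" bs=1M count=8 status=none
chmod 600 "$SWAPFILE"
# As root, format and enable it, run the resize, then clean up:
#   mkswap "$SWAPFILE" && swapon "$SWAPFILE"
#   resize2fs -p /dev/md0
#   swapoff "$SWAPFILE" && rm "$SWAPFILE"
ls -l "$SWAPFILE"
```

A swap file avoids repartitioning just for a one-off resize, and can be removed afterwards.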

elatllat (elatllat) wrote :

So 4 years later,
are we just not using debian/ubuntu for large file systems?
or are people using an ext alternative?

resize2fs -M /dev/mapper/crypt_left
resize2fs 1.42.5 (29-Jul-2012)
Resizing the filesystem on /dev/mapper/crypt_left to 1885633106 (4k) blocks.
resize2fs: Memory allocation failed while trying to resize /dev/mapper/crypt_left
Please run 'e2fsck -fy /dev/mapper/crypt_left' to fix the filesystem
after the aborted resize operation.

Theodore Ts'o (tytso) wrote :

No other file system supports shrinking the file system (which is what you are trying to do with resize2fs -M). And honestly, it's rare that people with very large file systems try to shrink them, which is why (a) most other file systems don't support it at all, and (b) it's not a super high priority for ext4 developers to try to track down.

If you are trying to grow the file system, that should work without problems (and in fact, given 1.42.5, I'd recommend using on-line resizing to grow file systems as opposed to off-line resizing, as there are a couple of bugs that weren't fixed until the very latest version of e2fsprogs).

I asked last time someone complained about this problem (1) how much memory they had, and (2) did they have swap enabled, and (3) was this a 32-bit or 64-bit system? If you have a 64-bit system, and you have plenty of swap enabled, and you are still running into the problem and are willing to work with us a bit to try to debug the problem (which may require compiling the latest version of e2fsprogs from the git tree with some debugging code inserted), please contact the linux-ext4 mailing list directly.

elatllat (elatllat) wrote :

Thanks for the quick answer tytso.

I used resize2fs to do an online grow from 8 TB to 12 TB, and that worked fine.
Now I'm trying to do an offline shrink back down to 8 TB, and it's unable to do so.

I am using a Raspberry Pi, so it's 32-bit and limited to 256 MB of RAM, and I have 2x that in swap.
I am using debian not ubuntu but I'm guessing this is common code.

uname -a
Linux test 3.6.11+ #474 PREEMPT Thu Jun 13 17:14:42 BST 2013 armv6l GNU/Linux

(taken during a "e2fsck -fy ")
free
             total       used       free     shared    buffers     cached
Mem:        237648     225440      12208          0      63240      11796
-/+ buffers/cache:     150404      87244
Swap:       475132      13444     461688

I was watching top and never saw it use more than 53% of the RAM, but I may have missed the crucial moment.

I'll contact <email address hidden> as you suggest.

Theodore Ts'o (tytso) wrote :

256MB? Cough, choke. Yeah, I'm not at __all__ surprised that resize2fs required more memory than that.

In fact, there will be certain sorts of file system corruption or certain file system patterns (i.e., using lots and lots of hard links) where it's almost certain that 256MB, or even 512MB, won't be enough memory for e2fsck to repair a corrupted file system in some cases.

E2fsck was architected to trade memory usage for run time, on the assumption that most of the time, when you have a large file system, you also have adequate amounts of memory, and people would prefer that fsck run as quickly as possible, especially when during boot there is no other use for the memory. I've done some work so that e2fsck could work well on a system with only 4GB of memory and two dozen 2TB disks each mounted as a separate file system, so the latest versions of e2fsprogs are a bit better about optimizing memory usage. But note that I'm talking about gigabytes of memory, not megabytes....
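On low-memory machines, one documented mitigation is letting e2fsck spill its in-memory inode-count tables to disk via /etc/e2fsck.conf; a sketch (check e2fsck.conf(5) for the version you have; the directory path is an example):

```
[scratch_files]
	directory = /var/cache/e2fsck
```

This trades run time for memory, which is exactly the opposite of e2fsck's default design assumption.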

With a 32-bit system, userspace is limited to 3GB of virtual address space, which is why I asked the question --- on x86 with PAE, you can have a 32-bit system with 32GB of memory, but a single process won't be able to use more than 3GB of that memory. But if this is a system with only 256MB of memory, that's not going to be the limit; the limiting factor will be your physical memory and your swap.

elatllat (elatllat) wrote :

An inode is only 256 bytes, the superblock is only 1024 bytes, and I am likely missing something here, but I see no reason to require more than a few inodes in RAM at any given time (and this seems to be the case for creating, growing, checking, and repairing). I would expect resize2fs to use a buffer of up to the available RAM to speed things up, but only need a few MB of RAM for any operation. I would also expect that if some minimum amount of RAM were required, resize2fs would check and calculate it beforehand and tell the user how much RAM is required, and maybe even warn when growing to a size resize2fs will not be able to shrink back down from.

I guess e2fsprogs still has some room to improve.

Theodore Ts'o (tytso) wrote :

Resize2fs needs to keep track of which blocks need to be relocated, and which inode each is associated with (so it can update the block reference after copying the block out of the portion of the file system that will be gone once the shrink operation finishes).
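A back-of-the-envelope sketch (my arithmetic, not resize2fs's actual data structures) of why per-block bookkeeping at this scale dwarfs a 256 MB machine:

```shell
# Even a single one-bit-per-block bitmap is sizeable at this scale.
total_blocks=1951503808             # block count from the e2fsck log above
bitmap_bytes=$((total_blocks / 8))  # one bit per 4 KiB block
echo "$((bitmap_bytes / 1024 / 1024)) MiB for one block bitmap alone"
```

Mappings from relocated blocks to their inodes come on top of that, so swap in the single-digit-gigabyte range is plausible.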

I am sure that e2fsprogs still has room to improve. However, I'm not going to accept patches which make things worse for the most common configurations, which are typically (a) small amounts of memory and small amounts of storage (i.e., an Android device with 512MB to 4GB of memory and generally 8-32GB of flash; Linode or Amazon EC2 VMs will be roughly the same, but with much slower emulated HDDs instead of flash), or (b) an enterprise server with 8GB to 64GB of memory and 4-12TB of disks. Patches which make things better for your configuration, but do not make things worse for these much more common configurations (and which don't break e2fsprogs's built-in regression test suite), will be gratefully accepted.

Niklas Busch (j-niklas-x) wrote :

I ran into the same problem on my ARM 5 machine with 512 MB RAM (Debian squeeze).
It has a LVM system consisting of 4*3 TiB disks.

I needed to shrink an EXT4 file system from 8.5 TiB to something like 7 TiB, and kept getting the "Memory allocation failed" error. So, I tried to reduce it in smaller and smaller increments, but to no avail.

The basic system with the LVM off-line has a swap partition just under 1 GiB (1023.4 MiB according to gdisk).
While resize2fs was running, I was monitoring the mem/swap usage in a separate terminal (using free).
It never got close to filling the swap, but I thought "what the h*ll" and attached another disk with a huge SWAP partition, did swapon, and tried again. Now it worked. Again I monitored with free: It never used more than 1217 MiB of the SWAP.
Perhaps a test of requirements should be implemented (or at least a switch for it)? That way I wouldn't have to run e2fsck after each failed attempt.
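The kind of requirements test suggested here could be sketched as follows; the one-bit-per-block figure is an illustrative lower bound I'm assuming, not resize2fs's real requirement, and the block count is an example:

```shell
# Compare free memory plus free swap against a rough estimate
# derived from the file system's block count.
blocks=1951503808
need_kib=$((blocks / 8 / 1024))
avail_kib=$(awk '/^(MemAvailable|SwapFree):/ {sum += $2} END {print sum}' /proc/meminfo)
echo "estimated: ${need_kib} KiB, available: ${avail_kib} KiB"
if [ "$avail_kib" -lt "$need_kib" ]; then
    echo "likely not enough memory; add swap before resizing" >&2
fi
```

Failing fast like this would avoid the mandatory e2fsck run after each aborted resize attempt.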

Regarding the comment about most common configurations: perhaps you should rethink. I understand that you would not want to introduce something that would have bad consequences for enterprise configurations, but things are changing.
I could have moved all disks to my desktop machine with 16 GiB RAM and done it there, but it would have meant taking hardware apart. The reason the disks are attached to the little ARM box is that I want that storage on-line 24/7 and still be able to sleep at night (no fans).
I think this kind of set-up is becoming more common.

Just my 2 cents...

dino99 (9d9) wrote :

It would be very useful to get the latest bug-fix package, 1.42.9-2, available in the Debian sid archive, as soon as possible.

http://ftp-master.metadata.debian.org/changelogs//main/e/e2fsprogs/e2fsprogs_1.42.9-2_changelog

tags: added: precise saucy trusty
Theodore Ts'o (tytso) wrote :

One final note, before I mute this bug.

I don't consider it a bug that you can't shrink a file system with terabytes of disk space when you only have 256 or 512MB of memory. More to the point, I'm not paid to make e2fsprogs work well on that configuration. No other file system supports shrinking at all, and I doubt most file systems would be able to handle this configuration well if they needed to repair a badly corrupted file system with this little memory (unless you enable swap and are prepared to wait for a long time).

If someone wants to try to make e2fsprogs work well for something that *I* consider to be an edge case (and yes, I have a bookshelf NAS box; it has Gigabytes worth of memory; funny, that), patches will be gratefully accepted. Send them to the linux-ext4 mailing list. Be warned, though, if it trashes performance on the vast majority of deployed use cases, I'm not going to accept the patch.

While I agree that Ubuntu should upgrade to 1.42.9, as it fixes a very large number of bugs, including some that could cause data corruption, it's not going to handle this particular feature request.

P.S. My cell phone has more memory than what some of the people on this list are proposing to use for a file server.....

dino99 (9d9) wrote :

Thanks for the Trusty upgrade

tags: removed: trusty
Church (church4regsvc) wrote :

Googled up a few related threads where the suggested fix was adding swap. Even with 4 GB RAM and 10 GB swap it didn't help me, though, for a rather full 5.5 TB ext4 fs with a big number of files. It seems I could somewhat work around it by specifying an intermediate size between the one -M tried to reduce to and the original fs size (converting the -M block count to GiB with bc as blocks*4/1024^2, then throwing a few tens of gigs on top of that).
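The stepwise workaround described above can be sketched like this; the block count is the -M minimum from a log earlier in this thread, and on a real system you would take it from "resize2fs -P" instead:

```shell
# Shrink to a target with ~10% headroom above the estimated minimum,
# instead of asking resize2fs -M for the absolute minimum.
min_blocks=1885633106                     # e.g. from: resize2fs -P /dev/md1
target=$((min_blocks + min_blocks / 10))  # add roughly 10% headroom
echo "resize2fs /dev/md1 $target"
```

Leaving headroom appears to reduce the amount of relocation bookkeeping resize2fs has to hold in memory, which may be why this sometimes succeeds where -M fails.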

elatllat (elatllat) wrote :

e2fsck -y $D
fails with
"ext2fs_get_mem: Cannot allocate memory"
on a computer with 100GB swap and 2GB RAM.
#e2fsck 1.42.13 (17-May-2015)

e2fsprogs is clearly not designed to scale; I guess it's time to switch to btrfs.
