e2image reports invalid argument when used with big partitions

Bug #2063369 reported by Jaromír Cápík
This bug affects 1 person
Affects: e2fsprogs (Ubuntu)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

It seems e2image is unable to work correctly with large filesystems. I get the following error with ext4 on top of a 48TB RAID5 array.

# e2image -pr /dev/md3 md3.e2i
e2image 1.46.5 (30-Dec-2021)
Scanning inodes...
Copying 802818 / 1547571 blocks (52%) 00:02:21 remaining at 20.50 MB/sseek_relative: Invalid argument

Revision history for this message
Theodore Ts'o (tytso) wrote :

If the file system has a limit on the maximum size of a file (for example, ext4 has a maximum file size of 16TB with 4KiB blocks), then a relative seek via lseek(fd, offset, SEEK_CUR) past that limit will fail and the system call will return EINVAL. This is what is causing the error.
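To illustrate the failure mode with a quick shell experiment (file names here are made up, and whether the second write fails depends on the filesystem holding the file):

```shell
# Extending a sparse file is fine as long as the resulting size stays
# under the filesystem's maximum file size...
dd if=/dev/zero of=sparse-test bs=1 count=1 seek=$((1 << 40)) 2>/dev/null   # 1 TiB offset: OK on any modern Linux fs
# ...but seeking/writing past the limit (16TiB on ext4 with 4KiB blocks)
# is rejected by the kernel before any data is written:
if ! dd if=/dev/zero of=sparse-test bs=1 count=1 seek=$((17 << 40)) 2>/dev/null; then
    echo "write past 16TiB refused by this filesystem"
fi
rm -f sparse-test
```

On a filesystem with a larger per-file limit (XFS, for instance) the second command simply succeeds and creates a 17TiB sparse file.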

If you want to create a raw image of a very large file system, you'll either have to write it to a block device which is sufficiently large, or you'll have to make sure the destination file system can support a file of that particular size. (And, of course, that it has sufficient free space to hold the raw image in the first place.)

Another workaround is to compress the file, e.g., "e2image -pr /dev/md3 - | gzip > md3.e2i.gz"

Revision history for this message
Jaromír Cápík (tavvva) wrote :

Oh, sorry, I thought the limit could not be reached that easily in the case of sparse files full of unallocated blocks. Apparently the limit includes the unallocated space.

I retested it, and the sparse file really does get truncated at 16TB. I believe that is exactly the moment when the tool should report an error like: "Unable to create a sparse file of that size! Unsupported by the target filesystem."

Instead, the tool continues to extract data and, after a long time, fails with "Invalid argument".

To prevent confusion, would it be possible to modify the code so that it fails immediately when it cannot create a sparse file of the required size?

Thank you.

Revision history for this message
Theodore Ts'o (tytso) wrote :

Today, we don't create the sparse file "with the required size" when it is first opened. We could try seeking to the maximum size of the device, writing a single byte, and seeing whether or not that fails; if it does fail, we could *assume* that this was caused by the target file system not supporting such a large file. But there are other reasons why the write might fail (for example, the system administrator or some supervisor program might have set a file size limit via the setrlimit(2) system call or the /etc/security/limits.conf file).
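A minimal sketch of that up-front probe (the function name and sizes are illustrative, and, as noted above, a failure here can equally mean an rlimit or quota rather than a filesystem limit):

```shell
# Try to write one byte at the last offset the raw image would occupy.
# Success means the destination can represent a file of that size;
# failure means it cannot (fs limit, RLIMIT_FSIZE, quota, ...).
probe_size() {  # usage: probe_size FILE BYTES
    dd if=/dev/zero of="$1" bs=1 count=1 seek=$(($2 - 1)) 2>/dev/null
}

if probe_size md3.e2i $((48 << 40)); then   # 48 TiB, as in this report
    echo "destination supports a 48TiB sparse file"
else
    echo "destination cannot hold a 48TiB file; use a block device or compress" >&2
fi
rm -f md3.e2i
```

Because only a single byte is actually written, a successful probe still leaves the file sparse and consumes essentially no disk space.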

Also note that there is no supported way for a userspace program to query the kernel for the maximum file size supported by a particular file system. It depends on the file system type, the file system block size, and potentially on file system features that might be enabled on that particular file system.

I agree that it would be nice to print a more user-friendly error message, and to print it right away, as opposed to after writing up to 16TB of image file. We just want to make sure that if the system administrator has set a maximum file size of, say, 2 GiB, we don't print a message claiming it's a file system limitation; that would be confusing and would result in my getting angry bug reports. :-)

Revision history for this message
Jaromír Cápík (tavvva) wrote :

Well ... the tool could print all possible reasons for the failure to avoid confusion. Writing one byte at the last position seems like a good solution to me. If that's acceptable, it would really improve the user experience. Thank you.

Revision history for this message
Jaromír Cápík (tavvva) wrote :

I'm not sure how the truncate tool works internally, but it does the same thing with the -s parameter.

$ truncate -s 48T size-test
truncate: failed to truncate 'size-test' at 52776558133248 bytes: File too large
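That matches the behaviour being asked for: GNU truncate calls ftruncate(2), which checks the requested size against the filesystem's limit up front and fails with EFBIG ("File too large") before any data is written. For a size the filesystem does support, the same call simply creates a sparse file that occupies (almost) no data blocks:

```shell
# A size within the filesystem's limit succeeds immediately and
# allocates no data blocks; stat shows logical size vs. allocation.
truncate -s 1M size-test
stat -c 'size: %s bytes, allocated: %b blocks' size-test
rm -f size-test
```

So an equivalent ftruncate (or seek-and-write-one-byte) at e2image startup would give the reporter the immediate error they are asking for.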
