Comment 2 for bug 1632424

Steve Langasek (vorlon) wrote: Re: [Bug 1632424] Re: Use a more accurate directory size fudge factor

On Tue, Oct 11, 2016 at 05:58:19PM -0000, Barry Warsaw wrote:
> The reason why this is different than the other bug is because we know
> we *always* need some overhead for file system metadata, but we do not
> always need extra space above and beyond that.

> Here's a thought: what if we create the file system and use statvfs() to
> examine f_bfree/f_bavail. If those numbers are too high, bisect
> downward until we get a good fit. This would be slower but more
> space-efficient.
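
For concreteness, a rough sketch of that bisection in Python, assuming we
mkfs an empty image and then check whether the payload would fit;
make_and_mount and unmount_and_discard are hypothetical helpers standing
in for the image-creation, mkfs.ext4, and loop-mount plumbing, while
os.statvfs supplies the f_bavail/f_frsize numbers Barry mentions:

    import os

    def free_bytes(mountpoint):
        # f_bavail is counted in units of the fragment size f_frsize.
        st = os.statvfs(mountpoint)
        return st.f_bavail * st.f_frsize

    def smallest_fitting_size(lo, hi, payload_bytes, slack=1 << 20):
        # Binary-search for the smallest image size (in bytes) whose
        # freshly made filesystem still has room for the payload plus
        # a little slack. Invariant: a size of `hi` is assumed to fit.
        while lo < hi:
            mid = (lo + hi) // 2
            mountpoint = make_and_mount(mid)       # hypothetical helper
            try:
                fits = free_bytes(mountpoint) >= payload_bytes + slack
            finally:
                unmount_and_discard(mountpoint)    # hypothetical helper
            if fits:
                hi = mid          # still fits; try smaller
            else:
                lo = mid + 1      # too small; grow
        return lo

Each probe costs a full mkfs plus mount/unmount cycle, which is exactly
the slowness being weighed below.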

In almost all cases, we will be turning around and expanding the filesystem
again on first boot. I don't think there's much value in a slow,
build-the-filesystem-multiple-times approach to finding an optimal size for
the filesystem, when that "optimal size" only minimizes the number of
zeroes we have to write to the disk. In most cases I expect the mastering
of the minimized filesystem would take longer than any time it saves on the
disk-write side. (Ok, 5 seconds saved on each disk image in the factory,
across 20,000 devices, comes to roughly 28 hours in aggregate and would
easily repay 20 minutes of extra time preparing the image; but it's still
an ugly tradeoff I'd rather we not have to make.)

I think we should instead be trying to work out the necessary size by
drilling down into the actual details of what ext4 needs for metadata space.
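
For a sense of what that drill-down involves, here is a back-of-the-envelope
sketch using common mke2fs defaults (4 KiB blocks, 32768 blocks per group,
256-byte inodes, a 16 KiB bytes-per-inode ratio). A real calculation would
take these values from the mke2fs.conf actually in use, and would also
account for sparse superblock copies and reserved GDT blocks, which this
sketch glosses over:

    import math

    BLOCK_SIZE = 4096
    BLOCKS_PER_GROUP = BLOCK_SIZE * 8   # one bit per block in the block bitmap
    INODE_SIZE = 256
    BYTES_PER_INODE = 16384             # default inode ratio

    def ext4_metadata_blocks(fs_blocks):
        groups = math.ceil(fs_blocks / BLOCKS_PER_GROUP)
        inodes_per_group = (BLOCKS_PER_GROUP * BLOCK_SIZE) // BYTES_PER_INODE
        inode_table = math.ceil(inodes_per_group * INODE_SIZE / BLOCK_SIZE)
        # Each group: block bitmap (1) + inode bitmap (1) + inode table.
        per_group = 2 + inode_table
        # 128 MiB journal at 4 KiB blocks; mke2fs scales this with fs size.
        journal = 32768
        return groups * per_group + journal

On a 4 GiB image (1,048,576 blocks, 32 groups) this comes to
32 * 514 + 32768 = 49,216 blocks, i.e. roughly 192 MiB of metadata before
superblock copies - about 4.7% of the image, which is the kind of number a
fudge factor ought to be derived from rather than guessed.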