Use a more accurate directory size fudge factor

Bug #1632424 reported by Barry Warsaw
This bug affects 1 person
Affects: Ubuntu Image
Status: New
Importance: Low
Assigned to: Unassigned

Bug Description

Unlike LP: #1632085, this bug specifically addresses the 1.5x fudge factor for file system overhead that we apply after calculating the size of the directory to copy into the image. The previous bug was needed to handle the case of kvm images, which won't resize their file systems automatically.

This bug is about the file system overhead itself, which is difficult to calculate accurately. For example, when we copy a directory of files and subdirectories to an ext4 file system, what's the overhead of that file system? Right now we just add 50% more space, but that could be quite wasteful.

I don't know of a more accurate way to calculate this overhead though.
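
For concreteness, the calculation in question is roughly the following; the names and structure here are illustrative, not the actual ubuntu-image code:

import os

# Illustrative sketch: sum the sizes of everything under the staging
# directory, then pad by the fixed 1.5x fudge factor to leave room for
# ext4 metadata.  This mirrors the approach described above, not the
# actual ubuntu-image implementation.
FUDGE_FACTOR = 1.5

def estimated_image_size(rootdir):
    total = 0
    for dirpath, dirnames, filenames in os.walk(rootdir):
        for name in dirnames + filenames:
            # lstat so symlinks count as links, not their targets.
            total += os.lstat(os.path.join(dirpath, name)).st_size
    return int(total * FUDGE_FACTOR)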

Revision history for this message
Barry Warsaw (barry) wrote :

This is different from the other bug because we know we *always* need some overhead for file system metadata, but we do not always need extra space above and beyond that.

Here's a thought: what if we create the file system and use statvfs() to examine f_bfree/f_bavail? If those numbers are too high, bisect downward until we get a good fit. This would be slower but more space-efficient.
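
Very roughly, I'm imagining something like the sketch below, where populate() is a hypothetical helper that builds and mounts a candidate image of the given size and returns its mountpoint, or None if the contents no longer fit; nothing here exists in the tree today:

import os

def free_fraction(mountpoint):
    # Fraction of blocks still unallocated, per statvfs(3).
    st = os.statvfs(mountpoint)
    return st.f_bavail / st.f_blocks

def bisect_image_size(populate, lo, hi, target_free=0.05):
    # Shrink the candidate size until the populated file system is left
    # with roughly target_free of its blocks unused.
    while hi - lo > max(1, hi // 100):
        mid = (lo + hi) // 2
        mountpoint = populate(mid)
        if mountpoint is None or free_fraction(mountpoint) < target_free:
            lo = mid      # too tight; the image needs to be bigger
        else:
            hi = mid      # still roomy; keep shrinking
    return hi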

Revision history for this message
Steve Langasek (vorlon) wrote : Re: [Bug 1632424] Re: Use a more accurate directory size fudge factor

On Tue, Oct 11, 2016 at 05:58:19PM -0000, Barry Warsaw wrote:
> This is different from the other bug because we know we *always* need
> some overhead for file system metadata, but we do not always need extra
> space above and beyond that.

> Here's a thought: what if we create the file system and use statvfs() to
> examine f_bfree/f_bavail? If those numbers are too high, bisect downward
> until we get a good fit. This would be slower but more space-efficient.

In almost all cases, we will be turning around and expanding the filesystem
again on first boot. I don't think there's much value in a slow,
build-the-filesystem-multiple-times approach to try to get an optimal size
for the filesystem, when that "optimal size" is only for minimizing the
number of zeroes we have to write to the disk - in most cases I expect the
mastering of the minimized filesystem would take longer than any time saved
on the side of the disk write. (Ok, 5 seconds saved on each disk image in
the factory for 20,000 devices is worth 20 minutes extra time preparing the
image, but it's still an ugly tradeoff I'd rather we not have to make.)

I think we should instead be trying to work out the necessary size by
drilling down into the actual details of what ext4 needs for metadata space.
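
Very roughly, that calculation might look like the sketch below. The constants are my reading of the mke2fs defaults (4 KiB blocks, one inode per 16 KiB of space, 256-byte inodes, plus a journal allowance); they are assumptions rather than anything taken from ubuntu-image, and superblock copies, group descriptors and bitmaps are only approximated by a flat percentage:

import os

# Assumed mke2fs defaults; adjust to match the real mkfs invocation.
BLOCK_SIZE = 4096
BYTES_PER_INODE = 16384           # default inode_ratio in mke2fs.conf
INODE_SIZE = 256
JOURNAL_BYTES = 64 * 1024 * 1024  # rough allowance; mke2fs scales this

def estimate_ext4_size(rootdir):
    data_bytes = 0
    for dirpath, dirnames, filenames in os.walk(rootdir):
        for name in dirnames + filenames:
            size = os.lstat(os.path.join(dirpath, name)).st_size
            # Every file occupies a whole number of blocks on disk.
            data_bytes += -(-size // BLOCK_SIZE) * BLOCK_SIZE
    fs_bytes = data_bytes + JOURNAL_BYTES
    # The inode table is sized from the file system size, not the file count.
    inode_table = (fs_bytes // BYTES_PER_INODE) * INODE_SIZE
    # Flat 5% on top for bitmaps, group descriptors, reserved blocks, etc.
    return int((fs_bytes + inode_table) * 1.05)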

Changed in ubuntu-image:
importance: Undecided → Low