Comment 7 for bug 397745

rew (r-e-wolff) wrote :

I'm pretty sure that 67000000 files is not a magic number. I'm pretty sure that 66999999 files will also be horribly slow.

Just the fact that /I/ happen to have 67 million files makes this bug invalid in your eyes?

My last "fight" with fsck resulted in an fsck time of something like three months. Provided your assumption that fsck time is linear in the file system parameters is valid, 100x fewer files would still mean an fsck time of about a day. Unacceptable to many people. Or 1000x fewer files might take 2.4 hours. How do you explain your "not much" answer to your boss after he asks: "what did you do this morning?" "I turned on my workstation this morning and it decided to fsck the root partition, and the fsck took 2.4 hours. So this morning I mostly waited for my machine to boot..."
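To make the scaling argument concrete, here is the back-of-the-envelope arithmetic, assuming the "three months" is roughly 100 days (my own round number) and that fsck time really does scale linearly with file count:

```python
# Linear-scaling estimate: ~100-day fsck, scaled down by file count.
fsck_days = 100  # "something like three months", assumed ~100 days

for factor in (100, 1000):
    hours = fsck_days * 24 / factor
    print(f"{factor}x fewer files: ~{hours:.1f} hours of fsck")
```

Even at 1000x fewer files, that is still a 2.4-hour wait at boot.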

Over the years people will have more and more files. This will continue to grow. I mentioned the fact that I was "beyond reasonable" back in 1987 with only 2200 files.

I adapt the boot scripts of my fileservers so that they boot first (i.e. start all services that depend only on the root partition) and then start fscking the data partitions. When that is done, they mount them and start the services that depend on them (NFS).

This allows me to monitor their progress remotely, and in the meantime I can already use the services they provide that do not depend on the large partition holding most of the files.

If we have just one partition this is no longer possible: ALL services depend on "fsck of /".

Another reason why splitting up the disk helps is that fsck time is not linear in the disk space, especially once fsck's working set exceeds the amount of RAM available.

When fsck's temporary storage requirements exceed RAM+swap, you had better have a partition available for storing the temporary files. Having a root partition that holds just the system files is good: the number of files there grows naturally. Those who have "more than average" storage and files will have them in /opt or /home or something like that, likely a separate partition. Thus the temporary files for fscking the /home partition can be set up to live on / .
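For e2fsck specifically, this can be configured: if I recall correctly, /etc/e2fsck.conf has a [scratch_files] stanza (documented in e2fsck.conf(5)) that redirects fsck's in-memory data structures to files in a given directory once memory runs low. A sketch, with the directory path being my example choice:

```
[scratch_files]
	directory = /var/cache/e2fsck
```

With /var on the root partition, fscking a huge /home can then spill its working set onto / instead of thrashing in swap.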

I work in data-recovery. Some of the recoveries are mostly images. As the number of pixels increases, JPEG compression becomes more effective as well: an image from a 12-megapixel camera is not twice as big as one from a 6-megapixel camera. So, ten years from now, when SLR cameras are 20-50 megapixels, we'll have 4-8 Mbytes of JPEG data per image. People will shoot and store many more images on their 20T drive. Now do the math of how many images will fit....
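Doing that math, with my assumed numbers from above (a 20 TB drive, ~8 MB per future high-resolution JPEG):

```python
# Rough count of JPEGs on a big drive; both figures are assumptions.
drive_bytes = 20 * 10**12  # 20 TB drive
image_bytes = 8 * 10**6    # ~8 MB per 20-50 Mpixel JPEG

images = drive_bytes // image_bytes
print(f"{images:,} images")  # 2,500,000 images
```

That is 2.5 million files from photos alone, and we are right back in multi-million-file fsck territory.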