Comment 3 for bug 77138

Yeti (yeti) wrote:

> After all, these are not that hard to support from Perl

This way of thinking is the source of all the problems. Sure, it is not hard to add compression support in one particular case. But then there is another case, then a couple more, then yet another, a few more, and it never stops. The complexity of every simple private or one-shot script that reads these files increases considerably.
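To make the burden concrete, here is a purely illustrative sketch (in Python; the quoted remark was about Perl, but the point is language-independent) of the extra branching every such script has to grow once the file may or may not be compressed. The plain index.sgml is from this bug; the index.sgml.gz sibling is the hypothetical compressed variant:

    import gzip
    import os

    def read_index(path="index.sgml"):
        # Plain file: all a one-shot script needed before compression.
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                return f.read()
        # The extra case compression forces on everyone: probe for a
        # gzipped sibling and pull in a decompression module.
        if os.path.exists(path + ".gz"):
            with gzip.open(path + ".gz", "rt", encoding="utf-8") as f:
                return f.read()
        raise FileNotFoundError(path)

Trivial, yes -- and that is exactly the trap: each reader of each compressed format pays this small tax, over and over, forever.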

Does grep support searching in compressed files? Why aren't HTML files compressed? Extending every HTML browser in the world to handle compressed files surely isn't so hard either. The 8 largest compressed files in /usr/share/doc on my Ubuntu desktop are PDF files -- and do xpdf or evince support compressed PDFs? Nope.

If Debian developers had invested the effort that went into compressing files in /usr/share/doc, extending every program that needs to read them (which fails for every program not in Debian, and for some that are), and fixing the related bugs into compression at the file system level instead, we would have working compression in several file systems by now, and Debian would save more disk space.

So, please just stop compressing the files, at least in this case. Adding decompression support is a problem -- not because it is hard, but because it encourages compression, and compression creates a burden for other people. I don't want to force everyone who needs to read index.sgml (which *is* supposed to be machine-readable) to implement decompression too.