tar4ibd reads files inefficiently
tar4ibd reads files in a very inefficient way. To support compressed tables, it starts with a 1KB block size and calls read() with that size. If page verification fails, it increases the block size, calls lseek() to restore the original file position, and retries the read().
It can be optimized in two ways (basically, by doing what the xtrabackup binary does):
- read in large chunks, and then process them with different page sizes if necessary;
- skip the small block sizes and default to 16KB when zip_size == 0, i.e. when the data file is not compressed.
I'm not sure it makes sense to fix this in 1.6, as tar4ibd will hopefully go away in 1.7, but I'm reporting it for the record.