tar4ibd reads files inefficiently
Bug #899931 reported by
Alexey Kopytov
This bug affects 4 people
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Percona XtraBackup moved to https://jira.percona.com/projects/PXB | Invalid | Undecided | Unassigned |
1.6 | Won't Fix | Undecided | Unassigned |
2.0 | Invalid | Undecided | Unassigned |
Bug Description
tar4ibd uses a very inefficient way to read files. To support compressed tables, it starts with a 1 KB block size and calls read() with that size. If page verification fails, it increases the block size, calls lseek() to restore the original position, and retries read().
It can be optimized in two ways (basically, by doing what xtrabackup binary does):
- read in large chunks, and then process them with different page sizes if necessary;
- don't consider small block sizes, and default to 16 KB when zip_size == 0, i.e. when the data file is not compressed.
I'm not sure it makes sense to fix this in 1.6, as tar4ibd is hopefully going away in 1.7. But reporting it just for the record.
On Sun, 04 Dec 2011 14:53:09 -0000, Alexey Kopytov <email address hidden> wrote:
maybe posix_fadvise(POSIX_FADV_WILLNEED)?
--
Stewart Smith