Tables with Barracuda page compression fail to back up.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Percona XtraBackup (moved to https://jira.percona.com/projects/PXB) | Confirmed | Undecided | Jericho Rivera | —
Bug Description
MariaDB 10.1.22
# xtrabackup --version
xtrabackup version 2.4.6 based on MySQL server 5.7.13 Linux (x86_64) (revision id: 8ec05b7)
The table is not corrupt as far as I can tell: queries work as usual. But xtrabackup crashes every time on this table.
xtrabackup --user=xxx --password=xxx --backup --target-dir=/xxx
....
170405 19:14:42 [01] Copying ./xxx/crawler_
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
170405 19:14:43 >> log scanned up to (7413012351299)
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Error: failed to read page after 10 retries. File ./xxx/crawler_
[01] xtrabackup: Error: xtrabackup_
[01] xtrabackup: Error: failed to copy datafile.
SHOW CREATE TABLE crawler_
CREATE TABLE `crawler_
`merchant_id` int(11) NOT NULL DEFAULT '0',
`status` enum('new'
`shop_system` enum('unknown'
`shop_
`use_shop_
`shop_speed` enum('fast','slow') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'slow',
`update_crawl` enum('off'
`deep_crawl` enum('off'
`last_update` datetime DEFAULT NULL,
`comment` text COLLATE utf8_unicode_ci,
`encoding` enum('utf8','tis') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'tis',
`force_
`accept_cookies` enum('yes','no') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'no',
`extended_
`prices_
`base_urls` text COLLATE utf8_unicode_ci,
`add_base_
`cut_endless_urls` enum('yes','no') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'yes',
`deep_crawl_type` enum('website'
`start_urls` text COLLATE utf8_unicode_ci,
`test_
`strip_
`pattern_
`pattern_
`pattern_
`pattern_
`start_
`start_
`pattern_
`pattern_
`pattern_
`pattern_
`pattern_
`pattern_
`pattern_name` text COLLATE utf8_unicode_ci,
`pattern_
`pattern_url` text COLLATE utf8_unicode_ci,
`pattern_price` text COLLATE utf8_unicode_ci,
`pattern_
`pattern_
`pattern_shipping` text COLLATE utf8_unicode_ci,
`pattern_
`pattern_brand` text COLLATE utf8_unicode_ci,
`pattern_img` text COLLATE utf8_unicode_ci,
`pattern_img2` text COLLATE utf8_unicode_ci,
`pattern_img3` text COLLATE utf8_unicode_ci,
`pattern_img4` text COLLATE utf8_unicode_ci,
`pattern_img5` text COLLATE utf8_unicode_ci,
`pattern_img6` text COLLATE utf8_unicode_ci,
`pattern_img7` text COLLATE utf8_unicode_ci,
`pattern_img8` text COLLATE utf8_unicode_ci,
`pattern_img9` text COLLATE utf8_unicode_ci,
`timestamp_
`timestamp_
PRIMARY KEY (`merchant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=
SELECT * FROM INFORMATION_
TABLE_ID NAME FLAG N_COLS SPACE FILE_FORMAT ROW_FORMAT ZIP_PAGE_SIZE
3930 xxx/crawler_
2398 xxx_old/
Note: I have the same table in two databases, xxx and xxx_old.
Strangely, the problem only started when I created the table in xxx; backups worked fine while the table existed only in xxx_old.
The table in xxx_old also has page_compression=1 and the Barracuda format. I literally copy-pasted the same SHOW CREATE TABLE statement... Strange.
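For context, the common factor appears to be MariaDB's InnoDB page compression. A minimal reproduction sketch, assuming a local MariaDB 10.1 server with innodb_file_format=Barracuda, a database named `test`, and xtrabackup 2.4.x on the PATH (the table and paths here are illustrative, not taken from the report):

```shell
# Hypothetical reproduction sketch -- assumes a running local MariaDB 10.1
# server (Barracuda file format enabled) and xtrabackup 2.4.x installed.

# Create a page-compressed InnoDB table and insert a row.
mysql -e "CREATE TABLE test.t1 (id INT PRIMARY KEY) ENGINE=InnoDB PAGE_COMPRESSED=1;"
mysql -e "INSERT INTO test.t1 VALUES (1);"

# Attempt a backup; copying test/t1.ibd then fails with repeated
# "Database page corruption detected at page 1" retries.
xtrabackup --backup --target-dir=/root/backup
```

This matches the confirmation log that follows, where the failing file is ./test/t1.ibd.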
Marking as confirmed.
170511 07:20:58 >> log scanned up to (1624417)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 1 for mysql/innodb_
170511 07:20:58 [01] Copying ./ibdata1 to /root/backup/
[01] xtrabackup: Page 68 is a doublewrite buffer page, skipping.
[01] xtrabackup: Page 76 is a doublewrite buffer page, skipping.
170511 07:20:58 [01] ...done
170511 07:20:58 [01] Copying ./mysql/
170511 07:20:58 [01] ...done
170511 07:20:58 [01] Copying ./mysql/
170511 07:20:58 [01] ...done
170511 07:20:58 [01] Copying ./mysql/
170511 07:20:58 [01] ...done
170511 07:20:58 [01] Copying ./test/t1.ibd to /root/backup/
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
170511 07:20:59 >> log scanned up to (1624417)
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Database page corruption detected at page 1, retrying...
[01] xtrabackup: Error: failed to read page after 10 retries. File ./test/t1.ibd seems to be corrupted.
[01] xtrabackup: Error: xtrabackup_
[01] xtrabackup: Error: failed to copy datafile.