MySQL crashes while running a dbt2 test on data recovered via XtraBackup

Reported by XCheng on 2009-10-26
Affects: Percona XtraBackup
Importance: High
Assigned to: Unassigned

Bug Description

MySQL crashes and restarts when I run a 100-warehouse dbt2 test on data recovered by the XtraBackup (0.8, 0.9, 0.9.5rc) tool.

Here are the steps to repeat the bug:

 1. Clear the data/log directory and start MySQL.
 2. Generate 100 warehouses of DBT2 data and load them into MySQL (stored procedures included).
 3. Run the dbt2 test on it.
 4. Do an XtraBackup hot backup while the dbt2 test is running.
 5. After the backup completes, shut down MySQL, remove its original data/logs, and kill the dbt2 test.
 6. Do the XtraBackup apply-log and copy-back.
 7. Start MySQL again; it seems to work well, but ...
 8. Run the dbt2 test again; unfortunately, MySQL crashes and restarts.

 I also tested the hot backup with no dbt2 test running and with a 1-warehouse dbt2 test; no crash happened in those cases. It seems that the XtraBackup hot-backup tool can't cope with heavy workloads.

We need detailed information to analyze/reproduce the bug, because I did a similar test during development and didn't find any error at that time.

We need information about:

Hardware (what type of CPU, etc.)
Software (what OS, what my.cnf options for InnoDB/XtraBackup, etc.)
The error message of the crash.

Without that information, I can't do anything for this bug report.
Thank you.

XCheng (csforgood) wrote :

I also found that if NOTPM is very high (40k-50k) during the backup, the crash is more likely to happen. If the dbt2 TPM is only 10k-30k, the crash seldom happens.

MySQL version : 5.0.75
DBT2: dbt2-0.40

Data is stored on SSDs, logs on HD.

my.cnf :

default-storage-engine=innodb
innodb_file_per_table
innodb_buffer_pool_size = 16G
innodb_additional_mem_pool_size = 20M
## Set innodb_log_file_size to 25% of the buffer pool size
innodb_log_file_size = 1G
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT

error message:

InnoDB: Error (2): trying to extend a single-table tablespace 3
InnoDB: by single page(s) though the space size 23360. Page no 23360.
091027 12:52:44InnoDB: Error: trying to access a stray pointer (nil)
InnoDB: buf pool start is at 0x7f3f35880000, end at 0x7f4335880000
InnoDB: Probable reason is database corruption or memory
InnoDB: corruption. If this happens in an InnoDB database recovery, see
InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
InnoDB: how to force recovery.
091027 12:52:44InnoDB: Assertion failure in thread 1088379200 in file ./../include/buf0buf.ic line 232
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
InnoDB: about forcing recovery.
091027 12:52:44 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=134217728
read_buffer_size=1048576
max_used_connections=33
max_connections=1000
threads_connected=33
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 3203064 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
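The memory estimate in the crash report follows the formula it prints. sort_buffer_size is not shown in the log, but since it is the only unknown in the equation, it can be backed out; the ~2 MB figure below is inferred, not stated in the report:

```python
# Values printed in the 5.0.75 crash report above.
key_buffer_size = 134217728   # bytes (128 MB)
read_buffer_size = 1048576    # bytes (1 MB)
max_connections = 1000
total_kb = 3203064            # "could use up to ... K bytes"

# total_kb ~= (key_buffer_size + (read_buffer_size + sort_buffer_size) * max_connections) / 1024
# Solve for the single unknown, sort_buffer_size:
sort_buffer_size = (total_kb * 1024 - key_buffer_size) / max_connections - read_buffer_size
print(round(sort_buffer_size / 1024 / 1024, 2))  # → 2.0 (MB, approximately)
```

So this server was almost certainly running with the then-default sort_buffer_size of about 2 MB.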

thd=0x20422df0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0x40df50e0, backtrace may not be correct.
Bogus stack limit or frame pointer, fp=0x40df50e0, stack_bottom=0x40df0000, thread_stack=262144, aborting backtrace.
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x7f3ee6324628 is invalid pointer
thd->thread_id=23
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
Writing a core file
./bin/mysqld_safe: line 388: 20914 Segmentation fault (core dumped) nohup /usr/local/mysql-5.0/libexec/mysqld --defaults-file=/opt/schooner/mysql/config/my.cnf --basedir=/usr/local/my...
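The key line in this log is "trying to extend a single-table tablespace 3 by single page(s) though the space size 23360. Page no 23360": InnoDB was asked for a page at or beyond the tablespace's current size. Since pages are 0-indexed, a 23360-page file ends at page 23359, so page 23360 is exactly one past the end. A minimal illustration of that boundary condition (function name is illustrative, not InnoDB's):

```python
def page_in_bounds(page_no: int, space_size_pages: int) -> bool:
    """A tablespace of N pages holds pages 0 .. N-1."""
    return 0 <= page_no < space_size_pages

# The failing access from the log: space size 23360, requested page no 23360.
print(page_in_bounds(23359, 23360))  # True  - last valid page
print(page_in_bounds(23360, 23360))  # False - one past the end; this is what
                                     # triggers the extend/stray-pointer path
```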


XCheng (csforgood) wrote :

Continuing:

I tested MySQL 5.1.37 next. Surprisingly, no crash happened when I ran 10 warehouses (where MySQL 5.0.75 had crashed). Then I found that when running 10 warehouses on MySQL 5.1.37, the TPM is only ~20k, versus ~40k on MySQL 5.0.75 (I don't know what caused this drop; SSD utilization is very low). So I increased the warehouse count to 100, the TPM soon climbed to 40k while I did the hot backup, and as a result MySQL 5.1.37 crashed as well!

To repeat the bug, you have to make sure that your MySQL reaches a high TPM (at least ~40k) while running the DBT2 test, no matter how many warehouses you use.

Version: '5.1.37' socket: '/schooner/data/db0/mysql.sock' port: 3307 Source distribution
InnoDB: Error (2): trying to extend a single-table tablespace 3
InnoDB: by single page(s) though the space size 203648. Page no 203648.
091027 14:38:30InnoDB: Error: trying to access a stray pointer (nil)
InnoDB: buf pool start is at 0x7fc0a2b4c000, end at 0x7fc4a2b4c000
InnoDB: Probable reason is database corruption or memory
InnoDB: corruption. If this happens in an InnoDB database recovery, see
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
InnoDB: how to force recovery.
091027 14:38:30 InnoDB: Assertion failure in thread 1086601536 in file ../../storage/innobase/include/buf0buf.ic line 225
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
InnoDB: about forcing recovery.
091027 14:38:30 - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=134217728
read_buffer_size=1048576
max_used_connections=33
max_threads=1000
threads_connected=32
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 3213236 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x2056fa60
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x40c430f0 thread_stack 0x40000
/usr/local/mysql-5.1-product//libexec/mysqld(my_print_stacktrace+0x32)[0x873db0]
/usr/local/mysql-5.1-product//libexec/mysqld(handle_segfault+0x322)[0x5a2472]
/lib64/libpthread.so.0[0x331400de80]
/lib64/libc.so.6(gsignal+0x35)[0x3313430155]
/lib64/libc.so.6(abort+0x110)[0x3313431bf0]
/usr/local/mysql-5.1-product//libexec/mysqld[0x78aec8]
/usr/local/mysql-5.1-product//libexec/mysqld[0x78c269]
/usr/local/mysql-5.1-product//libexec/mysqld(page_create+0x58)[0x78bdde]
/usr/local/mysql-5.1-product//libexec/mysqld...

Thank you for the information.

It seems that the tablespace file extension done by "xtrabackup --prepare" may be insufficient in some cases.
I will check it.
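In other words: if a backed-up .ibd file is shorter than the highest page the redo log references, the prepare step must grow the file before applying those log records, otherwise replay asks for a page past the end of the file (as in the crash logs above). A hedged sketch of that extension step using plain zero-fill truncation, not XtraBackup's actual code; the function and path names are hypothetical:

```python
import os

PAGE_SIZE = 16 * 1024  # InnoDB default page size

def ensure_space_size(path: str, required_pages: int) -> None:
    """Grow a tablespace file with zero-filled pages so that
    pages 0 .. required_pages-1 all exist. Never shrinks the file."""
    current_pages = os.path.getsize(path) // PAGE_SIZE
    if current_pages < required_pages:
        with open(path, "r+b") as f:
            f.truncate(required_pages * PAGE_SIZE)

# e.g. before applying a log record for page 23360 of a 23360-page file:
# ensure_space_size("dbt2/warehouse.ibd", 23361)
```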

Changed in percona-xtrabackup:
status: New → Confirmed
importance: Undecided → Medium
importance: Medium → High
XCheng (csforgood) on 2009-10-28
visibility: private → public
visibility: public → private
XCheng (csforgood) on 2009-10-28
visibility: private → public
Changed in percona-xtrabackup:
status: Confirmed → Fix Committed
XCheng (csforgood) wrote :

Thank you very much!

Changed in percona-xtrabackup:
status: Fix Committed → Fix Released