Comment 5 for bug 1736177

Madolyn Sullivan (madolyns) wrote :

This seems to happen when killing long-running queries.
Here is another example of the issue:

innobackupex --version
innobackupex version 2.4.9 Linux (x86_64) (revision id: a467167cdd4)

command:
innobackupex --kill-wait-query-type=all --kill-long-queries-timeout=900 --kill-long-query-type=all --compress --compress-threads=4 /mnt/restore/innobackupex/2017_12_08/incr_2017_12_08_16_00_01 --no-timestamp --incremental --incremental-basedir=/mnt/restore/innobackupex/2017_12_08//incr_2017_12_08_15_00_01/
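
For reference, the kill options above mean that once FLUSH TABLES WITH READ LOCK is issued, any statement still blocking it after 900 seconds gets killed, which matches the "Killing query ... (duration 900 sec)" line further down in the log. A rough manual way to see which statements would be hit at that point (credentials omitted, the 80-character truncation is just illustrative, and the 900 mirrors --kill-long-queries-timeout):

mysql -e "SELECT id, time, LEFT(info, 80) AS query FROM information_schema.processlist WHERE command = 'Query' AND time > 900"

Killing one of those by hand with KILL QUERY <id> is roughly the operation the backup performs when the timeout expires.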

171208 16:05:34 [01] Compressing ./cloudpercept47/azure_reservation_orders_azure_tags.ibd to /mnt/restore/innobackupex/2017_12_08/incr_2017_12_08_16_00_01/cloudpercept47/azure_reservation_orders_azure_tags.ibd.delta.qp
171208 16:05:34 [01] ...done
171208 16:05:34 >> log scanned up to (1320480395795)
171208 16:05:34 Executing FLUSH TABLES WITH READ LOCK...
171208 16:05:34 Kill query timeout 900 seconds.
171208 16:05:35 >> log scanned up to (1320480495779)
171208 16:05:36 >> log scanned up to (1320480500250)
171208 16:05:37 >> log scanned up to (1320480500270)
171208 16:05:38 >> log scanned up to (1320480500280)
171208 16:05:39 >> log scanned up to (1320480500290)
171208 16:05:40 >> log scanned up to (1320480500300)
171208 16:05:41 >> log scanned up to (1320480500310)
171208 16:05:42 >> log scanned up to (1320480500320)
171208 16:05:43 >> log scanned up to (1320480500330)
171208 16:05:44 >> log scanned up to (1320480500330)
171208 16:05:45 >> log scanned up to (1320480500350)
171208 16:05:46 >> log scanned up to (1320480500350)
171208 16:05:47 >> log scanned up to (1320480500360)
.....
....
171208 16:20:28 >> log scanned up to (1320480504999)
171208 16:20:29 >> log scanned up to (1320480504999)
171208 16:20:30 >> log scanned up to (1320480504999)
171208 16:20:31 >> log scanned up to (1320480505009)
171208 16:20:32 >> log scanned up to (1320480505009)
171208 16:20:33 >> log scanned up to (1320480505009)
171208 16:20:34 Connecting to MySQL server host: localhost, user: root, password: set, port: 3306, socket: /var/run/mysqld/mysqld.sock
171208 16:20:34 Killing query 8406100 (duration 900 sec): INSERT INTO `aws_instance_usage_hours_monthlies` (`avg_amortized_cost`, `aws_instance_id`, `compute_cost`, `customer_id`, `ebs_io_cost`, `ebs_piops_cost`, `ebs_storage_cost`, `ebs_total_cost`, `ec2_dedicated_tenancy_cost`, `ec2_io_cost`, `ec2_optimized_ebs_surcharge_cost`, `ec2_other_cost`, `full_cost`, `job_execution_version_id`, `month`, `on_demand_hours`, `reserved_hours`, `spot_hours`, `total_cost`, `total_hours`, `used_pct`) VALUES (0.0, 3161096194893, 0.04841926, 889, NULL, NULL, NULL, NULL, 0.0, 0.0000028, 0.0, 0.0, 0.04842206, 3161095937272, '2017-04-01', 0.0, 0.0, 1.0, 0.04842206, 1.0, 0)
171208 16:20:34 >> log scanned up to (1320482193245)
171208 16:20:36 >> log scanned up to (1320494626970)
171208 16:20:38 >> log scanned up to (1320507474637)
16:20:38 UTC - xtrabackup got signal 11 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
innobackupex(my_print_stacktrace+0x2c)[0xd3a2fc]
innobackupex(handle_fatal_signal+0x262)[0xcf9312]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7f864eee1330]
/lib/x86_64-linux-gnu/libc.so.6(+0x3d467)[0x7f864ce05467]
innobackupex[0x7391ec]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8184)[0x7f864eed9184]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f864cec5ffd]

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup
+ retcd=2
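
The unresolved frame in the middle of the trace (innobackupex[0x7391ec]) is the interesting one. Assuming the trace came from a non-PIE binary and the matching 2.4.9 package or its debug symbols are installed (the path below is illustrative), addr2line should turn it into a function and source location:

addr2line -f -C -e /usr/bin/xtrabackup 0x7391ec

In 2.4, innobackupex is a symlink to the xtrabackup binary, so that is the binary that actually produced the stack trace.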