171208 16:05:34 [01] Compressing ./cloudpercept47/azure_reservation_orders_azure_tags.ibd to /mnt/restore/innobackupex/2017_12_08/incr_2017_12_08_16_00_01/cloudpercept47/azure_reservation_orders_azure_tags.ibd.delta.qp
171208 16:05:34 [01] ...done
171208 16:05:34 >> log scanned up to (1320480395795)
171208 16:05:34 Executing FLUSH TABLES WITH READ LOCK...
171208 16:05:34 Kill query timeout 900 seconds.
171208 16:05:35 >> log scanned up to (1320480495779)
171208 16:05:36 >> log scanned up to (1320480500250)
171208 16:05:37 >> log scanned up to (1320480500270)
171208 16:05:38 >> log scanned up to (1320480500280)
171208 16:05:39 >> log scanned up to (1320480500290)
171208 16:05:40 >> log scanned up to (1320480500300)
171208 16:05:41 >> log scanned up to (1320480500310)
171208 16:05:42 >> log scanned up to (1320480500320)
171208 16:05:43 >> log scanned up to (1320480500330)
171208 16:05:44 >> log scanned up to (1320480500330)
171208 16:05:45 >> log scanned up to (1320480500350)
171208 16:05:46 >> log scanned up to (1320480500350)
171208 16:05:47 >> log scanned up to (1320480500360)
.....
....
171208 16:20:28 >> log scanned up to (1320480504999)
171208 16:20:29 >> log scanned up to (1320480504999)
171208 16:20:30 >> log scanned up to (1320480504999)
171208 16:20:31 >> log scanned up to (1320480505009)
171208 16:20:32 >> log scanned up to (1320480505009)
171208 16:20:33 >> log scanned up to (1320480505009)
171208 16:20:34 Connecting to MySQL server host: localhost, user: root, password: set, port: 3306, socket: /var/run/mysqld/mysqld.sock
171208 16:20:34 Killing query 8406100 (duration 900 sec): INSERT INTO `aws_instance_usage_hours_monthlies` (`avg_amortized_cost`, `aws_instance_id`, `compute_cost`, `customer_id`, `ebs_io_cost`, `ebs_piops_cost`, `ebs_storage_cost`, `ebs_total_cost`, `ec2_dedicated_tenancy_cost`, `ec2_io_cost`, `ec2_optimized_ebs_surcharge_cost`, `ec2_other_cost`, `full_cost`, `job_execution_version_id`, `month`, `on_demand_hours`, `reserved_hours`, `spot_hours`, `total_cost`, `total_hours`, `used_pct`) VALUES (0.0, 3161096194893, 0.04841926, 889, NULL, NULL, NULL, NULL, 0.0, 0.0000028, 0.0, 0.0, 0.04842206, 3161095937272, '2017-04-01', 0.0, 0.0, 1.0, 0.04842206, 1.0, 0)
171208 16:20:34 >> log scanned up to (1320482193245)
171208 16:20:36 >> log scanned up to (1320494626970)
171208 16:20:38 >> log scanned up to (1320507474637)
16:20:38 UTC - xtrabackup got signal 11 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
innobackupex(my_print_stacktrace+0x2c)[0xd3a2fc]
innobackupex(handle_fatal_signal+0x262)[0xcf9312]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7f864eee1330]
/lib/x86_64-linux-gnu/libc.so.6(+0x3d467)[0x7f864ce05467]
innobackupex[0x7391ec]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8184)[0x7f864eed9184]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f864cec5ffd]
Seems to happen when killing long-running queries.
Have another example of the issue:
innobackupex --version
innobackupex version 2.4.9 Linux (x86_64) (revision id: a467167cdd4)
command: innobackupex --kill-wait-query-type=all --kill-long-queries-timeout=900 --kill-long-query-type=all --compress --compress-threads=4 /mnt/restore/innobackupex/2017_12_08/incr_2017_12_08_16_00_01 --no-timestamp --incremental --incremental-basedir=/mnt/restore/innobackupex/2017_12_08//incr_2017_12_08_15_00_01/
Please report a bug at https://bugs.launchpad.net/percona-xtrabackup
+ retcd=2
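For context on what the tool is doing at the moment of the crash: once `--kill-long-queries-timeout` (900 s here) expires while `FLUSH TABLES WITH READ LOCK` is blocked, innobackupex kills the queries that are blocking it, as in the `Killing query 8406100` line above, and the segfault follows immediately after. A minimal sketch of that selection step, using a hypothetical in-memory processlist snapshot (not innobackupex's actual implementation, which talks to a live server):

```python
# Sketch of the "kill long queries" selection logic, assuming a snapshot of
# SHOW PROCESSLIST rows as dicts. This only illustrates which queries a
# 900 s timeout would select for KILL QUERY; the real tool issues these
# statements over its MySQL connection.

KILL_LONG_QUERIES_TIMEOUT = 900  # seconds, from --kill-long-queries-timeout


def queries_to_kill(processlist, timeout=KILL_LONG_QUERIES_TIMEOUT,
                    query_type="all"):
    """Return KILL QUERY statements for queries running at least `timeout` s.

    `query_type` mirrors --kill-long-query-type: "all" kills any statement,
    "select" only SELECTs.
    """
    stmts = []
    for row in processlist:
        info = (row.get("Info") or "").lstrip()
        if not info:                 # idle/sleeping connections have no query
            continue
        if row["Time"] < timeout:    # not long-running yet
            continue
        if query_type == "select" and not info.upper().startswith("SELECT"):
            continue
        stmts.append(f"KILL QUERY {row['Id']}")
    return stmts


# Hypothetical snapshot mirroring the log: query 8406100 had run for 900 s.
snapshot = [
    {"Id": 8406100, "Time": 900,
     "Info": "INSERT INTO `aws_instance_usage_hours_monthlies` ..."},
    {"Id": 8406200, "Time": 3, "Info": "SELECT 1"},
    {"Id": 8406300, "Time": 1200, "Info": None},  # sleeping connection
]
print(queries_to_kill(snapshot))  # → ['KILL QUERY 8406100']
```

With `query_type="select"` the INSERT above would be spared, which is why the log's `--kill-long-query-type=all` matters: it is the setting under which the tool kills the INSERT and then hits signal 11.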