MariaDB 5.3.3-rc is erroneously (I think) reporting "Out of memory" and then crashing
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MariaDB | Fix Committed | Undecided | Unassigned |
Bug Description
Thank you for MariaDB 5.3.3-rc
MariaDB 5.3.3-rc is erroneously (I think) reporting "Out of memory" and then crashing.
From the stack trace it looks like it is creating an internal tmp table.
Sorry, I do not have a reproducer or a core dump. I do have several monitoring systems that collectively run frequently, and none show any data indicating the host (Red Hat EL 5.7 x86_64, 48G RAM, 2G swap, glibc-2.5-65) was low on memory: no swap used or swapping, and the only significant process was mysqld using vsz=28701476 rss=27949352, measured 21s before the crash, 14s before the 2nd "Out of memory" in the error log, and 30s before the 1st "Out of memory". The user running mysqld does have max memory size and virtual memory ulimits of 48G set (I just lowered this to 44G).
One thing I noticed in the crash log is key_buffer_size=0; it is not 0, it is 25769803776 (key_buffer = 24576M). I have noticed this in a few other MariaDB bugs.
I will share a my.cnf file if I can isolate a reproducer. The my.cnf file does have max_heap_table_size = 2048MB (2147483648) in it, while tmp_table_size is the default 16777216. aria_sort_
I'll work on getting a core file to be produced if it crashes again.
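For reference, a minimal sketch of the relevant my.cnf lines as described above (the full file is not included here; everything else is omitted):

```ini
[mysqld]
# Values as reported in this bug; other settings omitted.
key_buffer          = 24576M    # 25769803776 bytes, shown as key_buffer_size=0 in the crash log
max_heap_table_size = 2048M     # 2147483648 bytes
# tmp_table_size is left at its default of 16777216
```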
120208 22:56:38 [ERROR] mysqld: Out of memory (Needed 2256587384 bytes)
120208 22:56:38 [ERROR] Out of memory; check if mysqld or some other process uses all available memory; if not, you may have to use 'ulimit' to allow mysqld to use more memory or you can add more swap space
120208 22:57:16 [ERROR] mysqld: Out of memory (Needed 2572758680 bytes)
120208 22:57:16 [ERROR] Out of memory; check if mysqld or some other process uses all available memory; if not, you may have to use 'ulimit' to allow mysqld to use more memory or you can add more swap space
120208 22:57:23 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
Server version: 5.3.3-MariaDB-
key_buffer_size=0
read_buffer_
max_used_
max_threads=153
threads_connected=7
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x2ab0cc9319e0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x402fc0f8 thread_stack 0x48000
./bin/mysqld(
./bin/mysqld(
/lib64/
./bin/mysqld(
./bin/mysqld(
./bin/mysqld(
./bin/mysqld(
./bin/mysqld(
./bin/mysqld [0x6bc385]
./bin/mysqld(
./bin/mysqld(
./bin/mysqld(
./bin/mysqld(
./bin/mysqld(
./bin/mysqld(
./bin/mysqld(
./bin/mysqld(
/lib64/
/lib64/
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x2ab0ce944d48): /* comment removed */ UPDATE p_prelim_org AS org JOIN ( SELECT encore_load, cfterm_id, GROUP_CONCAT( CONCAT(
Connection ID (thread ID): 43272
Status: NOT_KILLED
Optimizer switch: index_merge=
The manual page at http:// contains
information that should help you find out what is causing the crash.
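For what it's worth, the truncated estimate in the log above is mysqld's standard worst-case formula, key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads. With the configured key_buffer and, hypothetically, default values for the two per-thread buffers (the actual values are truncated in the log), it works out as:

```sql
-- key_buffer_size is the configured 25769803776 (24576M); read_buffer_size=131072
-- and sort_buffer_size=2097152 are assumed defaults, not values from this report.
SELECT 25769803776 + (131072 + 2097152) * 153 AS estimated_max_bytes;
-- roughly 26.1G under these assumptions
```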
I have a reproducer for the "Out of memory" error being logged (not a crash); it is currently not trivial. After increasing aria_pagecache_buffer_size from 128MB to 2048MB I no longer saw the "Out of memory" error in the error log; instead the mysql client returns "Using too big key for internal temp tables". Both errors go away when I remove the line:
group_concat_max_len=131072
from the my.cnf file. Without the line, the reported global default value of 1024 is being used. I also get the error using 65536, 32768, 16384, 8192, 4096 and 2048; only with 1024 does it run error free.
The queries with the errors are using GROUP_CONCAT.
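A quick way to check which limit a session actually picks up, and to try a value without editing my.cnf (a sketch; assumes nothing else overrides the variable):

```sql
-- Compare the global default with what the current session inherited.
SELECT @@global.group_concat_max_len, @@session.group_concat_max_len;

-- Try the problematic limit per session instead of in the config file.
SET SESSION group_concat_max_len = 131072;
```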
"Out of memory" query:
UPDATE
    p_prelim_org AS pre
    JOIN ( SELECT
            pre.encore_load,
            pre.cfterm_id,
            GROUP_CONCAT( CONCAT("[", fin_sec.label, "]") ORDER BY fin_sec.indent ASC SEPARATOR '' ) AS sub_label,
            MAX( fin_sec.indent ) AS sub_indent
        FROM
            p_prelim_org AS pre
            JOIN
            p_prelim_org AS sub_check
                ON pre.encore_load = sub_check.encore_load AND
                   pre.cfterm_id_match = sub_check.cfterm_id AND
                   pre.status = "MATCHED" AND
                   pre.whos_structure = "FINAL" AND
                   pre.indent_match + 1 < sub_check.indent AND
                   pre.table_type = "CF"
            JOIN
            p_final_sections AS fin_sec
                ON pre.encore_load = fin_sec.encore_load AND
                   pre.cfterm_id_match = fin_sec.cfterm_id AND
                   pre.indent_match >= fin_sec.indent
        GROUP BY pre.encore_load, pre.cfterm_id
    ) AS sub_sec
        ON pre.encore_load = sub_sec.encore_load AND
           pre.cfterm_id = sub_sec.cfterm_id
SET pre.whos_structure = "PRELIM", pre.section_label = sub_sec.sub_label,
    pre.indent = sub_sec.sub_indent + 1, pre.source = CONCAT( 'Used FINAL struct, ', pre.source )
EXPLAIN on the inner select:
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: pre
         type: ALL
possible_keys: PRIMARY
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 76
        Extra: Using where; Using filesort
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: sub_check
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 5
          ref: ffar_load_a.pre.encore_load,ffar_load_a.pre.cfterm_id_match
         rows: 1
        Extra: Using where
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: fin_sec
         type: ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 5
          ref: ffar_load_a.pre.encore_load,ffar_load_a.pre.cfterm_id_match
         rows: 2
        Extra: Using index condition
"Using too big key for internal temp tables" query:
UPDATE
    p_prelim_org AS org
    JOIN ( SELECT
            encore_load,
            cfterm_id,
            ...