Memory leak on a filtered slave (Valgrind: malloc in Log_event::read_log_event)

Bug #1042946 reported by Mike Siekkinen on 2012-08-28
This bug affects 8 people
Affects: MySQL Server; Percona Server
Status: Fix Released (three Percona Server series)
Assigned to: Sergei Glushchenko

Bug Description

I have recently been upgrading various MySQL instances to 5.5.25a. I have one specific machine that is experiencing a memory leak. I fear this report may lack any actionable meat, but here's what I can tell you:

* The machine that is leaking is replicating an all-InnoDB stream. It has been doing so for years on previous versions without issue.

* While there are several databases in the binary logs feeding it, it is only capturing about a dozen specific tables from a single database using Replicate_Do_Table

* No other Replicate_* options are filled in

* This machine's master also has other slaves that are not using any of the aforementioned filtering, and none of them are experiencing this leak.

* No applications talk to this instance. Its role is just to replicate with log-slave-updates in order to generate a smaller binlog to push over the wire to slaves in remote datacenters.

* I've read over the usual suggestions, but none of them led to any likely culprits (no temporary tables or stored procedures are in use).

* It has an innodb_buffer_pool_size of 2200M. On startup it shows a memory footprint close to this. Over time, top reports memory footprint growth of about 0.1 GB per 6 hours. Nagios graphs show this rate is very linear and consistent.

* I have attempted to kill the replication threads coming in and going out, to see whether the leak was tied to one of those sessions. This freed no memory. The event scheduler thread is running, though there are no actual events defined here; stopping that thread likewise frees no memory.

* My only current recourse is to periodically restart the mysql process :(

Please let me know if there's anything else that would be of use to actually track down the root cause of this bug.

Related branches

Mike Siekkinen (9-mikes) on 2012-08-28
description: updated

Can you provide smaps / smem output of your mysqld? You can also run it under valgrind with debug symbols installed (not recommended for production since it will run slower). When running under valgrind, make sure to provide the valgrind suppression file. You can also run it under mysql-test-run (it will provide the suppression file automatically).

Mike Siekkinen (9-mikes) wrote :

Thanks, we'll look at getting what you need. I don't have any experience using valgrind with mysql. From what my coworkers told me after looking into this, it seems we need to build from source with some specific configure flags to let valgrind do its job. We've been running into some issues with this but will report back once we get it figured out. My google-fu, or lack thereof, for smaps/smem only seems to lead to pages with links to defunct projects and 404 pages. Can you offer any pointers to where they are currently maintained?

Rebuilding for Valgrind might not be necessary in this case: we are trying to catch a memory leak, and those should be detectable on release builds with debug symbols just fine. Also note that Valgrind reports the leaks at shutdown time.
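As a point of reference, here is a minimal sketch of such a run. The suppression-file path and option set below are assumptions; adjust them to your install, and expect mysqld to run several times slower. The block only assembles and prints the command so it can be reviewed before running by hand.

```shell
# sketch: run mysqld under Valgrind memcheck; the leak summary is printed
# only after a clean shutdown (all paths below are assumptions)
SUPP=/usr/share/mysql/valgrind.supp   # e.g. mysql-test/valgrind.supp in the source tree
CMD="valgrind --tool=memcheck --leak-check=full --show-reachable=no \
--num-callers=40 --suppressions=$SUPP --log-file=/tmp/mysqld-valgrind.log \
mysqld --user=mysql"
echo "$CMD"   # run by hand, generate load, then shut mysqld down cleanly
```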

Alex Elman (aelman) wrote :


I've been working with Mike on this issue. To reiterate, we're experiencing a steady, unbounded, linear increase in the amount of memory this mysql instance uses during the course of its run.

So we performed a 12 hour run with Valgrind. I had to bump VG_N_SEGMENTS to a higher constant and rebuild Valgrind, because profiling Percona with memcheck for longer than a few hours needs a larger segment table than Valgrind supports out of the box. Attached is the leak summary; all of the information on the Valgrind settings we used and on our kernel build should be inside that file. We used the default suppression file. We're using Percona w/ XtraDB Ver 14.14 Distrib 5.5.25a, for Linux (x86_64) using readline 5.1.

One interesting thing to note: the error about "Conditional jump or move depends on uninitialized value(s)" happens over 10 million times in less than 12 hours. Is this by design, or is it an indication of something more serious?

The leak summary seems to turn up nil, but maybe we didn't run Valgrind with the tools intended to find a leak of this nature?

Alex Elman (aelman) wrote :

Hi again,

We immediately started Percona for a second run after the one that produced the leak summary posted above. This run lasted 10 hours; I have attached the second leak summary with this comment.

This time it looks like a tiny bit of memory was leaked.


Alex Elman (aelman) wrote :

In case it is relevant, here are the release version and revision number of the Percona 5.5.25a we're using:

Server version: 5.5.25a-27.1-log Percona Server (GPL), Release rel27.1, Revision 277

Alex, Mike -

Thank you. Indeed, the second Valgrind log shows a memory leak for three slave events. Assuming a 10x Valgrind slowdown, there should be ~18 MB of leaked data after the 10 hour run; I don't think three log events could be that big.

Does the issue happen if you use other replication filtering options, especially replicate-do-db? To check that, the master should send events that contain both filtered and passed database names.

By code analysis, exec_relay_log_event() does not always free the event before returning: if update_log_pos() fails, and in the case of a replicated transaction rollback due to a temporary error. Neither of these seems related to table filtering, though.

Re. "Conditional jump or move depends on uninitialized value(s)", it's probably a bug, but a relatively benign one. It looks like it is triggering once per query.

You can help us confirm it by starting mysqld under valgrind, executing one query, and stopping mysqld. Slave replication should not be necessary.

Aleksandr Khomyakov (toairo) wrote :


Over the last week I upgraded about 40 instances from 5.1.57 to 5.5.27, starting from the bottom of the replication tree and working up. After upgrading everything except the top-level masters, I noticed that some servers leak memory: about 1 GB in 48 hours.

The replication tree looks like the attached picture (approximate scheme).

Some replicas leak about 1 GB in 2 days, some about 200 MB per week.

The replication rules differ across the tree.
In one case we replicate all data, without any rules (except 1 Replicate_Wild_Ignore_Table): a leak of about 500 MB per week.

Case #2: replication of only 2 tables: a leak of about 1 GB in 2 days. I tried configuring the rules with Replicate_Do_Table and Replicate_Wild_Do_Table; no change.
innodb_buffer_pool_size = 256M, but the RSS grows to 1.5 GB.

Thanks, Alex, for the suggestion about VG_N_SEGMENTS; without it Valgrind returned only "Warning: set address range perms: large range" after mysqld shut down.
Using valgrind is new for me.
I am now running mysql under valgrind on a copy of a production server that has the leak; let's see what happens in 10-12 hours.

I also began checking older versions for the leak. The last stable version I found is 5.5.24, and I downgraded all leaking mysqld instances to it.
For production servers memory leaks are very, very bad, but I am glad I did not need to downgrade all the way back to 5.1. =)

I am ready to help find the cause of the leak.

Aleksandr -

Thanks for your help; please keep us updated. If I understand you correctly, 5.5.24 does not leak memory and the first version that does is 5.5.25a, is that correct?

Aleksandr Khomyakov (toairo) wrote :

I am not sure now, because I need more statistics.
Yesterday I downgraded all leaking instances to 5.5.24, because that version did not appear to leak as much as 5.5.27.

Today I see that the servers still leak, but more slowly.

Case 1: server with full replication, without rules
        - with 5.5.27 a leak of 350 MB per 24 h on average; after downgrading to 5.5.24, about 100 MB per 24 h.
Case #2: partial replication of tables from the master, but most of the load does not come from replication
       - with 5.5.27 a leak of 1 GB per 24 h; after downgrading to 5.5.24, 150 MB per 24 h.
Case 3: same replication rules as case #2, but case #2 carries more load
      - with 5.5.27 about 500 MB per 24 h
      - with 5.5.20 (which I tried before 5.5.24) no leak, but it only ran for about 24 hours, so I need to try that version again.

I continue to monitor the memory.

continue to monitor the memory.

About the valgrind results on the test instance:
most of the time I get no leak report, but last night, after mysql had run for 4 hours, I got the attached result. Nothing new, though: it shows the same trace as Alex's valgrind result.

Aleksandr Khomyakov (toairo) wrote :

I forgot to say: in 2 of the cases the leak stopped at version 5.5.24, but I will keep watching.

Aleksandr Khomyakov (toairo) wrote :

And to answer your question about 5.5.25a: by my observations it leaks the same as 5.5.27.

Aleksandr -

Thanks, we will wait for more updates from you.

Setting this to New as there is at least one confirmed memory leak. It probably cannot explain everything, but it still needs fixing.

This is probably an upstream issue, will need to report it there once it's fully confirmed.

Aleksandr Khomyakov (toairo) wrote :


After a week of observing mysql 5.5.24 I can say that it is stable. The growth I saw in the day after the rollback was just fluctuation within 100 MB.

It looks like the attached graph.

With version 5.1, however, mysql took a certain amount of memory and held that level steadily, without fluctuation.

Aleksandr Khomyakov (toairo) wrote :

My experiments with valgrind on the test server showed no leaks. The server starts around 2.5 GB, grows to 3.3 GB, and then the growth stops.
I tested the versions:
- 5.5.27-28.1 Percona Server
- stock MySQL 5.5.28

After stopping the server, valgrind shows no leaks.

I suspect internal buffers are growing; this can be seen in smaps.

As an example I attached two smaps states. They are not from the test server but from a small production server that replicates only 2 tables; it has no clients besides monitoring and a service that receives the data through replication.

One state is from right after startup and one is from 18 hours later.

I also attached the my.cnf for this instance; everything is in a single archive.
mysql 5.5.27-28.1 Percona Server
OS CentOS 5.5 x86 kernel

Total mapped size ("Size:" fields) from the first and second smaps:
[root@db25 airo]# grep "^Size:" /tmp/smaps_2250 |awk '{ SUM += $2} END { print SUM }'

[root@db25 airo]# grep "^Size:" /tmp/smaps_1715 |awk '{ SUM += $2} END { print SUM }'

Diff: 162512 kB
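The grep/awk pipeline above can be wrapped into a small helper that reports the growth between two snapshots directly. A sketch, demonstrated here on tiny synthetic snapshot files (real ones come from /proc/<pid>/smaps of the running mysqld):

```shell
# sketch: compare total mapped size between two smaps snapshots
smaps_total_kb() { awk '/^Size:/ { sum += $2 } END { print sum+0 }' "$1"; }

# demo with synthetic snapshots; substitute saved /proc/<pid>/smaps dumps
printf 'Size: 100 kB\nRss: 80 kB\nSize: 200 kB\n' > /tmp/smaps_a
printf 'Size: 150 kB\nRss: 90 kB\nSize: 400 kB\n' > /tmp/smaps_b

a=$(smaps_total_kb /tmp/smaps_a)   # 300
b=$(smaps_total_kb /tmp/smaps_b)   # 550
echo "diff: $((b - a)) kB"         # growth between the snapshots
```

Summing the "Rss:" fields instead of "Size:" gives resident rather than virtual growth, which is closer to what top reports.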

Inside the smaps there are 3 suspicious buffers:

1. region starting at 020af000: from 14.5 MB to 46 MB (diff 30.4 MB)
2. region starting at 7fa99c000000: from 7 MB to 62 MB (diff 55 MB)
3. an additional region starting at 7fa994000000: 64 MB

That is the difference between the initial and final states.

I do not know how to identify what those buffers are.

Ovais Tariq (ovais-tariq) wrote :


I have run tests to determine memory usage in conjunction with this report.
My test executes queries that create implicit temporary tables, with tmp_table_size set big enough that the implicit tmp tables are always created in memory.

The version of Percona Server used in the tests is 5.5.15-rel21.0-log

First create a table and populate it with data as follows (the CREATE TABLE statement was truncated in the original; the header below is reconstructed from the inserts, so the exact type of c1 is an assumption):

mysql> CREATE TABLE ti2 (
  `c1` int unsigned NOT NULL AUTO_INCREMENT,  -- reconstructed; inserts never set c1
  `c2` char(255) DEFAULT NULL,
  PRIMARY KEY (`c1`)
) ENGINE=InnoDB;

mysql> insert into ti2(c2) values('abc');
Query OK, 1 row affected (0.08 sec)

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 1 row affected (0.08 sec)
Records: 1 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 2 rows affected (0.13 sec)
Records: 2 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 4 rows affected (0.10 sec)
Records: 4 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 8 rows affected (0.14 sec)
Records: 8 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 16 rows affected (0.11 sec)
Records: 16 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 32 rows affected (0.08 sec)
Records: 32 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 64 rows affected (0.11 sec)
Records: 64 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 128 rows affected (0.14 sec)
Records: 128 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 256 rows affected (0.09 sec)
Records: 256 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 512 rows affected (0.15 sec)
Records: 512 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 1024 rows affected (0.18 sec)
Records: 1024 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 2048 rows affected (0.19 sec)
Records: 2048 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 4096 rows affected (0.23 sec)
Records: 4096 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 8192 rows affected (0.39 sec)
Records: 8192 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 16384 rows affected (0.57 sec)
Records: 16384 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 32768 rows affected (1.21 sec)
Records: 32768 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 65536 rows affected (2.84 sec)
Records: 65536 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 131072 rows affected (5.60 sec)
Records: 131072 Duplicates: 0 Warnings: 0

mysql> insert into ti2(c2) select c2 from ti2;
Query OK, 262144 rows affected (10.95 sec)
Records: 262144 Duplicates: 0 Warnings: 0

The table size should be around 152M:
mysql> show table status\G
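For convenience, the population above can be generated as a script instead of being typed by hand. A sketch (the mysql invocation is left commented out; credentials and database name are up to the reader):

```shell
# sketch: generate the 19 doubling INSERT ... SELECT statements used above
{
  echo "INSERT INTO ti2(c2) VALUES('abc');"
  for i in $(seq 1 19); do
    echo "INSERT INTO ti2(c2) SELECT c2 FROM ti2;"
  done
} > /tmp/populate_ti2.sql
# mysql <db> < /tmp/populate_ti2.sql    # then: SHOW TABLE STATUS LIKE 'ti2'\G
```

Nineteen doublings of the initial row give 2^19 = 524288 rows, which at roughly 290 bytes per row matches the ~152M table size quoted below.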


Ovais Tariq (ovais-tariq) wrote :

I have tested the memory usage in Percona Server 5.5.23, Percona Server 5.5.27-rel29.0 and MySQL 5.5.27 using the script I provided above, and PS 5.5.27-rel29.0 uses a lot more memory than PS 5.5.23 and MySQL 5.5.27. Attached are the log files showing memory usage.

Ovais Tariq (ovais-tariq) wrote :

Again, after my latest tests using the script I attached earlier: versions of MySQL >= 5.5.18 do not show excessive memory usage; however, when comparing Percona Server 5.5.18 to MySQL 5.5.18, I saw PS showing excessive memory usage. I think this has to do with implicit in-memory tables and memory not being released immediately.

@Mike, @Aleksandr:

    1. I see that you noticed this memory leak even without any slave filters (replicate-* options; I also don't see them in the attached my.cnf). Is that right?

    2. Does running a 'FLUSH TABLES' free the memory in your case (without the restart described in the initial description)?

This is to check whether the issues commented from #17 onwards match the earlier ones, and also whether this is related to mysql replication.


What are you referring to by "implicit memory tables"? No released Percona Server version contains the query optimizer changes that use MEMORY instead of MyISAM in more cases.

Aleksandr Khomyakov (toairo) wrote :


   1. Not quite: there have been memory leaks on multiple servers with different replication rules.
One case: full replication from the master, but with one rule, Replicate_Wild_Ignore_Table: mysql.%

  2. I tried FLUSH TABLES; it had no effect on the RSS.

Mike Siekkinen (9-mikes) wrote :

@Raghavendra, flush tables does not seem to have any effect for me.

Ovais Tariq (ovais-tariq) wrote :


By "implicit temporary tables" I mean temporary tables created by queries, for example a sub query that needs to create an implicit temporary table, or a query with an ORDER BY clause that cannot use an index. And I am referring to "implicit memory tables" created using MEMORY storage engine and not the on-disk ones.

@Ovais -

Thanks. Indeed, that's one place where the problem might be, although we did test the memory usage of explicit MEMORY tables when the feature was developed.

Using your test scenario against PS 5.5.28-29.1, I see the memory being released as soon as the queries using it are finished:

... <snip> ...
2453036 3716244
2460128 3716244
2428088 3716244
1864296 3191956
175408 1619092
175408 1619092
175408 1619092
175408 1619092
175408 1619092
175408 1619092

172496 1619092
172496 1619092
172496 1619092

So I am not sure that what you saw is the same issue as the one reported here, unless I am misinterpreting the results.

Roel Van de Paar (roel11) wrote :

For clarity, the only bug that has been established with 100% certainty so far is this one in Log_event::read_log_event:

==13638== 1,068 (256 direct, 812 indirect) bytes in 1 blocks are definitely lost in loss record 7 of 8
==13638== at 0x4A078B8: malloc (vg_replace_malloc.c:270)
==13638== by 0x7A3361: my_malloc (in /data/chroot_dc_head_new/usr/sbin/mysqld)
==13638== by 0x724D7C: Log_event::read_log_event(char const*, unsigned int, char const**, Format_description_log_event const*) (in /data/chroot_dc_head_new/usr/sbin/mysqld)
==13638== by 0x72856B: Log_event::read_log_event(st_io_cache*, st_mysql_mutex*, Format_description_log_event const*) (in /data/chroot_dc_head_new/usr/sbin/mysqld)
==13638== by 0x52FA09: ??? (in /data/chroot_dc_head_new/usr/sbin/mysqld)
==13638== by 0x53104D: handle_slave_sql (in /data/chroot_dc_head_new/usr/sbin/mysqld)
==13638== by 0x3DC3E0673C: start_thread (in /lib64/
==13638== by 0x3DC2ED3D1C: clone (in /lib64/

(From Aleksandr, note #11)

And another occurrence of the same:

==16132== 256 bytes in 1 blocks are definitely lost in loss record 4 of 8
==16132== at 0x4A07848: malloc (vg_replace_malloc.c:270)
==16132== by 0x7D1FE1: my_malloc (in /usr/local/percona/mysql-5.5.25a/usr/sbin/mysqld)
==16132== by 0x74F6E3: Log_event::read_log_event(char const*, unsigned int, char const**, Format_description_log_event const*) (in /usr/local/percona/mysql-5.5.25a/usr/sbin/mysqld)
==16132== by 0x752558: Log_event::read_log_event(st_io_cache*, st_mysql_mutex*, Format_description_log_event const*) (in /usr/local/percona/mysql-5.5.25a/usr/sbin/mysqld)
==16132== by 0x518993: ??? (in /usr/local/percona/mysql-5.5.25a/usr/sbin/mysqld)
==16132== by 0x52185C: ??? (in /usr/local/percona/mysql-5.5.25a/usr/sbin/mysqld)
==16132== by 0x522D33: handle_slave_sql (in /usr/local/percona/mysql-5.5.25a/usr/sbin/mysqld)
==16132== by 0x3ECD60673C: start_thread (in /lib64/
==16132== by 0x3ECC6D456C: clone (in /lib64/

(From Alex, note #5)

Though 256 bytes may not look like much, the same trace shows "possibly lost: 315,964 bytes in 2 blocks", and most of those possibly-lost bytes *are also* from Log_event::read_log_event. And this was only a 4 hour run. It is also likely that only certain field types and/or certain queries etc. cause the loss, so it may not be possible to extrapolate the numbers, or even to reliably reproduce the Valgrind stack on each run.

I believe both of these stacks were seen in connection with replication traffic only? It would also be interesting to see the my.cnf files Aleksandr and Alex used when these particular issues were observed, to check for similarities.

It would further be good to confirm whether the issue also shows up in the latest 5.5 MySQL; if so, we will report an upstream bug. Aleksandr, Alex, are you at liberty to test this?

Roel -

Perhaps Valgrind Massif would be a useful tool here?

summary: - Memory leak on a filtered slave
+ Memory leak on a filtered slave (Valgrind: malloc in Log_event::read_log_event)
Roel Van de Paar (roel11) wrote :

Laurynas, interesting proposal. If further "Log_event::read_log_event" analysis turns out to be a dead end (unlikely, I suspect), then Massif may help.

For others who are reading this: the type of issue that Massif finds is completely different.

Massif will detect unused blocks for which pointers remain valid:

'Also, there are certain space leaks that aren't detected by traditional leak-checkers, such as Memcheck's. That's because the memory isn't ever actually lost -- a pointer remains to it -- but it's not in use. Programs that have leaks like this can unnecessarily increase the amount of memory they are using over time. Massif can help identify these leaks. Importantly, Massif tells you not only how much heap memory your program is using, it also gives very detailed information that indicates which parts of your program are responsible for allocating the heap memory.' (Valgrind Massif manual)

While Memcheck finds issues for which the pointers are already lost (as in our case with Log_event::read_log_event):

'Definitely lost: [...] This means that no pointer to the block can be found. The block is classified as "lost", because the programmer could not possibly have freed it at program exit, since no pointer to it exists. This is likely a symptom of having lost the pointer at some earlier point in the program. Such cases should be fixed by the programmer.' (Memcheck manual)

Analyzing "Log_event::read_log_event" further would be the first step.

Roel Van de Paar (roel11) wrote :

See bug #1053342 - especially #8. There may be a substantial connection.

Vadim Tkachenko (vadim-tk) wrote :

Is there a repeatable procedure for this bug?

I have the impression that there is none, therefore I am setting the bug to "New" as non-verified.

A possible source of the leak is in the following lines of exec_relay_log_event():

    if (ev->get_type_code() != FORMAT_DESCRIPTION_EVENT &&
        !rli->is_deferred_event(ev))
    {
      DBUG_PRINT("info", ("Deleting the event after it has been executed"));
      delete ev;
      ev= NULL;
    }

Deferred events are the Int-, Rand- and User- var events. However, the code above *COULD* skip the deletion of other types of events *WITHOUT* their having been added to the deferred_events container, i.e. such events will never be deleted and their memory will leak. The reason is the implementation of is_deferred_event():

  /*
    Returns true if the argument event resides in the container;
    more specifically, the check is done against the last added event.
  */
  bool is_deferred_event(Log_event *ev)
  {
    return deferred_events_collecting ? deferred_events->is_last(ev) : false;
  };

bool is_last(Log_event *ev) { return ev == last_added; };

It just checks whether the candidate event's pointer equals last_added, the pointer recorded by the previous call to Deferred_log_events::add():

int Deferred_log_events::add(Log_event *ev)
{
  last_added= ev;
  insert_dynamic(&array, (uchar*) &ev);
  return 0;
}

As we can see, last_added is updated correctly there.

However, the deferred_events usage pattern is the following:

  rli->deferred_events= new Deferred_log_events(rli);
  ...
  if (received deferred event ev)
    add it to rli->deferred_events
  else
    execute statement
  ...
  delete rli->deferred_events;

The container is filled with events and cleared, filled and cleared again, innumerable times. It is cleared in the following code:

void Deferred_log_events::rewind()
{
  /*
    Reset preceding Query log event events whose execution was
    deferred because of slave side filtering.
  */
  if (!is_empty())
  {
    for (uint i= 0; i < array.elements; i++)
    {
      Log_event *ev= *(Log_event **) dynamic_array_ptr(&array, i);
      delete ev;
    }
    if (array.elements > array.max_element)
      freeze_size(&array);
    reset_dynamic(&array);
  }
}

After rewind(), last_added still points to an already deleted event, so nothing prevents the allocator from reusing the same address (last_added) for the next event that should not be deferred. This *SURELY* happens innumerable times: is_deferred_event() then returns true for an event that was never added to the container, exec_relay_log_event() skips the delete, and a pseudo-random replication log event leaks.

The suggested patch to fix this is:

=== modified file 'Percona-Server/sql/'
--- Percona-Server/sql/ 2012-04-21 10:24:39 +0000
+++ Percona-Server/sql/ 2012-12-19 08:58:02 +0000
@@ -1115,6 +1115,7 @@
     if (array.elements > array.max_element)
+    last_added= NULL;

Swany (greenlion) wrote :

To determine whether it is the replication filters, you might try a sysbench test on the master over many databases (256) of a few tables each, with a slave filter on each database or each table. If db or table filters are to blame, it should show up quickly.

Ovais Tariq (ovais-tariq) wrote :

Hi Justin,

The memory leak is reproducible on a server with replicate-wild-ignore-table = mysql.%, and it is not reproduced on the same server without replicate-wild-ignore-table = mysql.%.

Kevin Kwast (fehmn) wrote :

It looks like branch ~sergei.glushchenko/percona-server/ST-27220-bug1042946 fixes the memory leak for us.

Sergei, Ovais, and others at Percona - thank you for the great Christmas gift!

Ovais Tariq (ovais-tariq) wrote :

Kevin that's great :)

fengyi (fengyi) wrote :

I looked into the source code of Percona 5.5.18-rel23.0 and did not find the class Deferred_log_events, so this bug does not affect 5.5.18.

Kevin Kwast (fehmn) wrote :

Slave servers that were leaking memory on 5.5.29-rel29.4 look much better with 5.5.29-rel30.0. Thanks for getting the fix released.

Upstream fix in 5.5.31 and 5.6.11.

5.5$ bzr log -r 4175
revno: 4175
committer: Venkatesh Duggirala<email address hidden>
branch nick: mysql-5.5
timestamp: Sat 2013-01-26 15:03:01 +0530

  Due to not resetting a member (last_added) of
  Deferred events class inside a clean up function
  (Deferred_log_events::rewind), there is a memory
  leak on filtered slaves.

  Resetting last_added to NULL in rewind() function.

The corresponding revision in 5.6 is revno 4674.

Upstream fix in 5.1.70 too.

5.1$ bzr log -r 3968
revno: 3968
committer: Venkatesh Duggirala<email address hidden>
branch nick: mysql-5.1
timestamp: Fri 2013-03-15 08:56:20 +0530

  Back porting fix from mysql-5.5

tags: added: upstream

Percona now uses JIRA for bug reports, so this bug report was migrated there.

