Segfault in gcache::RingBuffer::get_new_buffer()

Bug #1152565 reported by Tomasz Klekot on 2013-03-08
This bug affects 2 people
Affects                                  Status        Assigned to
Status tracked in 3.x                    Fix Released  Alex Yurchenko
Percona XtraDB Cluster (moved to;        Fix Released  Alex Yurchenko
  status tracked in 5.6)

Bug Description

I noticed that, out of nowhere, our cluster shrank from 3 servers to 2. It seems one of our nodes crashed approximately 2 hours after it synced with the rest of the cluster (the servers run in the GMT+2 time zone).
The server was not overloaded and has plenty of memory:
# free -m
                     total       used       free     shared    buffers     cached
Mem:                 15922      12086       3836          0        129       6459
-/+ buffers/cache:                5497      10425
Swap:                 1952        132       1820

The server did not face any kind of performance issues, just crashed.

130307 22:09:35 [Note] WSREP: sst_donor_thread signaled with 0
130307 22:09:35 [Note] WSREP: Flushing tables for SST...
130307 22:09:35 [Note] WSREP: Provider paused at f4fc2dac-8761-11e2-0800-6accc9ba6bc6:4558
130307 22:09:35 [Note] WSREP: Tables flushed.
130307 22:13:04 [Note] WSREP: Provider resumed.
130307 22:13:04 [Note] WSREP: 1 (): State transfer to 0 () complete.
130307 22:13:04 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 4587)
130307 22:13:04 [Note] WSREP: Member 1 () synced with group.
130307 22:13:04 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 4587)
130307 22:13:04 [Note] WSREP: Synchronized with group, ready for connections
130307 22:13:04 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
130307 22:13:08 [Note] WSREP: 0 (): State transfer from 1 () complete.
130307 22:13:08 [Note] WSREP: Member 0 () synced with group.
21:59:05 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at

It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 262426320 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
You may download the Percona Server operations manual by visiting
You may find information in the manual which will help you identify the cause of the crash.

We are running:
Server version: 5.5.28-log Percona XtraDB Cluster (GPL), wsrep_23.7.r3821
CentOS release 6.3 (Final)

Related branches

Alex Yurchenko (ayurchen) wrote :

Most likely gcache file got corrupted on disk. Is it reproducible?

Tomasz Klekot (tomksoft) wrote :

So far everything works as it should and I have not managed to observe this problem again. I just rejoined that node (resynced the data) to get it back operational.
Regarding file corruption: I am fairly sure it could not be hardware corruption. The OS does not report any filesystem errors, and the server runs two brand-new Intel SSDs in RAID 1.

Alex Yurchenko (ayurchen) wrote :

There was a case where a customized rsync SST script corrupted the gcache file, so that can't be ruled out. Another possibility is a bug in the gcache code itself, but that is equally improbable: so far this is the only such report, and the code has not changed in years.

summary: - MySQLd crashed during normal operations
+ Segfault in gcache::RingBuffer::get_new_buffer()

Percona now uses JIRA for bug reports, so this bug report has been migrated to:
