Could not execute Delete_rows event

Bug #865108 reported by Walter Heck
This bug affects 1 person
Affects: MariaDB
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: none

Bug Description

For a few weeks now, replication from one 5.2.8 install to another 5.2.8 install has occasionally been failing where it ran fine before, with no changes in config according to our puppet files. The only change I see on that server is an upgrade from 5.2.7, but I'm not 100% sure that is when it started happening. The replication stops with the following full status:

<pre>
MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.203
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mariadb-bin.001802
          Read_Master_Log_Pos: 87563717
               Relay_Log_File: relay-bin.000705
                Relay_Log_Pos: 259550619
        Relay_Master_Log_File: mariadb-bin.001797
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 1032
                   Last_Error: Could not execute Delete_rows event on table zabbix.history_uint; Can't find record in 'history_uint', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mariadb-bin.001797, end_log_pos 259552671
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 259550472
              Relay_Log_Space: 1353580511
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 1032
               Last_SQL_Error: Could not execute Delete_rows event on table zabbix.history_uint; Can't find record in 'history_uint', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mariadb-bin.001797, end_log_pos 259552671
1 row in set (0.00 sec)
</pre>
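
In case it helps, the failing event should be locatable on the master from the coordinates above: Relay_Master_Log_File plus Exec_Master_Log_Pos mark where the SQL thread stopped, and the error quotes the event's end_log_pos. A minimal sketch using those values:

<pre>
-- On the master: list the events starting at the position where
-- the slave SQL thread stopped (Exec_Master_Log_Pos above).
SHOW BINLOG EVENTS IN 'mariadb-bin.001797' FROM 259550472 LIMIT 5;
</pre>

Running mysqlbinlog -v over the same range would additionally decode the row images of the Delete_rows event.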

I presume you boys need more info, so feel free to ask :)

Walter Heck (walterheck) wrote:

After reporting this bug, I was told on IRC that it was probably just data drift. I couldn't dispute that at the time, so I let it go. Now I'm seeing almost exactly the same problem, except the chances are microscopic that it's data drift this time. I cloned a slave by stopping the original slave, rsyncing the datadir and binary logs over, and starting it in the new location with the same my.cnf. Within hours the new slave stopped with the following error, while the original machine has been humming along for months.

Could not execute Update_rows event on table yomamma.albums; Can't find record in 'albums', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.002006, end_log_pos 680669835

That seems too much of a coincidence to be data drift, right?
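
For what it's worth, this is how I'd check whether the row from that Update_rows event is actually gone on the clone. The column name and value below are placeholders; the real key has to be decoded from the binlog first (e.g. with mysqlbinlog -v):

<pre>
-- Run the same lookup on the original slave and on the clone;
-- album_id and 12345 are placeholders for the primary key decoded
-- from the failing Update_rows event.
SELECT * FROM yomamma.albums WHERE album_id = 12345;
</pre>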

Kristian Nielsen (knielsen) wrote:

Well, is it data drift or not?
You need to compare the table between the master and the slave to check.
Or at least check if the row that the replication is complaining about is indeed missing on the slave.

If the row is missing, the problem does seem to be data drift and the bug happened earlier; it is then necessary to track down which event was replicated incorrectly to cause this.

If the row is not missing, it seems to be a problem with the replication of this specific event; ideally we need the relevant binlog and table data to reproduce it.
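
A minimal sketch of the table comparison, assuming the table is quiet enough to scan (for a consistent result the data must not be changing during the scan, or use a checksum tool such as pt-table-checksum):

<pre>
-- Run on both master and slave and compare the results; EXTENDED
-- forces a full row-by-row checksum instead of any stored value.
CHECKSUM TABLE zabbix.history_uint EXTENDED;
</pre>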
