Wrong result with in_to_exists=on in maria-5.3-mwl89

Bug #778434 reported by Philip Stoev
Affects: MariaDB
Status: Fix Released
Importance: High
Assigned to: Sergey Petrunia
Milestone: 5.3

Bug Description

Not repeatable with maria-5.3; observed with maria-5.3-mwl89. The following query:

SELECT * FROM t1 INNER JOIN t2 ON t2.f10 = t1.f11
WHERE (6, 234) IN (
        SELECT t3.f1, t3.f1
        FROM t3 JOIN t4 ON t4.f11 = t3.f10
);

returns rows even though the value 234 occurs nowhere in the database, so there is no way for the IN predicate to be TRUE.

EXPLAIN:

id  select_type         table  type    possible_keys  key      key_len  ref    rows  Extra
1   PRIMARY             t1     system  NULL           NULL     NULL     NULL   1
1   PRIMARY             t2     system  NULL           NULL     NULL     NULL   1
2   DEPENDENT SUBQUERY  t3     const   PRIMARY        PRIMARY  4        const  1     Using index condition
2   DEPENDENT SUBQUERY  t4     ALL     NULL           NULL     NULL     NULL   21    Using where; Using join buffer (flat, BNL join)

Test case:

CREATE TABLE t1 ( f11 int) ;
INSERT IGNORE INTO t1 VALUES (0);

CREATE TABLE t2 ( f10 int) ;
INSERT IGNORE INTO t2 VALUES (0);

CREATE TABLE t3 ( f1 int NOT NULL , f10 int, PRIMARY KEY (f1)) ;
INSERT IGNORE INTO t3 VALUES (6,0),(10,0);

CREATE TABLE t4 ( f11 int) ;
INSERT IGNORE INTO t4 VALUES
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(NULL),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0);

SET SESSION optimizer_switch='materialization=off,in_to_exists=on,subquery_cache=off,semijoin=off';

SELECT * FROM t1 INNER JOIN t2 ON t2.f10 = t1.f11
WHERE (6, 234) IN (
        SELECT t3.f1, t3.f1
        FROM t3 JOIN t4 ON t4.f11 = t3.f10
);

Changed in maria:
assignee: nobody → Timour Katchaounov (timour)
milestone: none → 5.3
Changed in maria:
status: New → Confirmed
importance: Undecided → High
Changed in maria:
status: Confirmed → In Progress
Timour Katchaounov (timour) wrote:

Reduced test case:

CREATE TABLE t1 ( f11 int) ;
INSERT INTO t1 VALUES (1);

CREATE TABLE t3 ( f1 int NOT NULL , f10 int, PRIMARY KEY (f1)) ;
INSERT INTO t3 VALUES (6,0),(10,0);

CREATE TABLE t4 ( f11 int) ;
INSERT INTO t4 VALUES (0),(1);

SELECT * FROM t1 WHERE (6, 0) IN (SELECT t3.f1, t3.f1 FROM t3, t4 WHERE t4.f11 = t3.f10);
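
For reference, the in_to_exists strategy rewrites such an IN predicate into a correlated EXISTS with injected equalities. A hand-written equivalent of the reduced query (a sketch of the transformation, not the optimizer's exact output) is:

SELECT * FROM t1 WHERE EXISTS (
        SELECT 1 FROM t3, t4
        WHERE t4.f11 = t3.f10 AND t3.f1 = 6 AND t3.f1 = 0);

Since t3.f1 cannot equal both 6 and 0, the subquery can never produce a row, so the correct result is an empty set.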

Timour Katchaounov (timour) wrote:

The bug is related to index condition pushdown; it reproduces with:
set @@optimizer_switch='index_condition_pushdown=on';

My current analysis shows that the condition on table t3 is
pushed to the index as a whole, but apparently is not used
to filter the rows of table t3.
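
A quick way to confirm the ICP connection (a sketch, using the reduced test case above) is to disable the pushdown and re-run the query:

SET SESSION optimizer_switch='index_condition_pushdown=off';
SELECT * FROM t1 WHERE (6, 0) IN (SELECT t3.f1, t3.f1 FROM t3, t4 WHERE t4.f11 = t3.f10);

With the pushdown disabled, the condition on t3 is evaluated by the server instead of being handed to the index layer, and the query returns the expected empty result.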

Timour Katchaounov (timour) wrote:

Analysis:
The pushed index condition is not checked because in the function
ha_myisam::idx_cond_push() the condition

  if (active_index == pushed_idx_cond_keyno)

is false (active_index = 64, i.e. MAX_KEY, while pushed_idx_cond_keyno = 0).
As a result, the function mi_set_index_cond_func() is not called,
and when we get to the while loop in mi_rkey():

      while ((info->lastpos >= info->state->data_file_length &&
              (search_flag != HA_READ_KEY_EXACT ||
              last_used_keyseg != keyinfo->seg + keyinfo->keysegs)) ||
             (info->index_cond_func &&
              (res= mi_check_index_cond(info, inx, buf)) == ICP_NO_MATCH))

the member info->index_cond_func is NULL, thus mi_check_index_cond()
is not called at all, and the index condition is not applied.

The method idx_cond_push() is called as follows:

JOIN::optimize -> make_join_readinfo -> push_index_cond ->
ha_myisam::idx_cond_push()

The most likely reason for active_index == MAX_KEY and
handler::inited == handler::NONE is that idx_cond_push() is
called without any prior call to handler::ha_index_init().
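
If the build exposes the Handler_icp% status counters (an assumption; they are MariaDB-specific and may not be present in every 5.3 tree), the missed check can also be observed from SQL:

FLUSH STATUS;
SELECT * FROM t1 WHERE (6, 0) IN (SELECT t3.f1, t3.f1 FROM t3, t4 WHERE t4.f11 = t3.f10);
SHOW STATUS LIKE 'Handler_icp%';
-- If mi_check_index_cond() were being reached, Handler_icp_attempts
-- would be non-zero; with this bug it stays 0 because
-- info->index_cond_func is NULL.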

Timour Katchaounov (timour) wrote:

This appears to be a bug in ICP; reassigning to SergeyP.

Changed in maria:
status: In Progress → Confirmed
assignee: Timour Katchaounov (timour) → Sergey Petrunia (sergefp)
Changed in maria:
status: Confirmed → Fix Committed
Changed in maria:
status: Fix Committed → Fix Released