Comment 17 for bug 1607300

Revision history for this message
fimbulvetr (fimbulvetr) wrote :

Ok, here's my follow up.

At the time of my last response, I had 210,000,000 records in my table. I continued my 'slow delete' process at roughly 30-50 rows per second for the last 6 days.

Please note this is entirely unscientific.

I've gone from 210,000,000 to 188,396,007 actual rows.

Here are the estimated row counts reported by SHOW TABLE STATUS on each version:

PS 5.7.13-6-log - 150,815,123
PS 5.6.31-77.0-log - 185,383,998
5.5.41-tokudb-7.5.5-MariaDB-log - 193,416,244
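For context, estimates like the ones above can be read from SHOW TABLE STATUS (or equivalently the TABLE_ROWS column in information_schema) and compared against an exact count. A rough sketch; the table name `mytable` is a placeholder, not the actual table from this report:

```sql
-- Estimated row count (same figure SHOW TABLE STATUS reports)
SELECT TABLE_ROWS
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = DATABASE()
   AND TABLE_NAME = 'mytable';

-- Exact count: a full scan, so it is slow on a ~190M-row table
SELECT COUNT(*) FROM mytable;

-- Rebuild the statistics so the estimate is refreshed
ANALYZE TABLE mytable;
```

The gap between the first two queries is the drift being described in this comment; ANALYZE TABLE is the manual equivalent of the auto_analyze workaround mentioned below.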

The most accurate is 5.6. I suspect 5.5.41 could have been more accurate, but the problem is that I did not run an ANALYZE on it 6 days ago, so its difference has accrued over a much longer time frame. My suspicion is that it would have been much more accurate had I done so, and that for *my* data churn model, the old-style row estimations (TokuDB 7.5.5) were much more accurate.

During this time, no ANALYZE was performed on any of the tables, and auto_analyze was off.

At this rate, I expect my PS 5.7.13-6-log to hit an estimated row count of 0 well before the table actually has 0 rows.

I personally would consider this a bug, with auto_analyze being a just barely acceptable workaround, as this particular workload is infrequent.