So, this is essentially what's suggested in the patch (with new variables to control behavior and tests to check all these):
...
+ /* New logarithmic number of pages that are estimated. We
+ first pick the minimum of srv_stats_sample_pages and the number of
+ pages in the index. Then we pick the maximum of that number of
+ pages and log2(number of index pages) * srv_stats_sample_pages. */
if (index->stat_index_size > 0) {
- n_sample_pages = index->stat_index_size;
+ n_sample_pages = ut_max(ut_min(srv_stats_sample_pages, index->stat_index_size),
+ log2(index->stat_index_size)*srv_stats_sample_pages);
} else {
n_sample_pages = 1;
...
This may well work better than the default setting, or even than any fixed value picked by a DBA based on some reasoning. So I think we should consider this feature seriously (for 5.5, and for tables that do not use persistent statistics in 5.6).