This file contains:

  A module that accepts a condition, index (or partitioning) description,
  and builds lists of intervals (in index/partitioning space), such that
  all possible records that match the condition are contained within the
  intervals.

  The entry point for the range analysis module is the get_mm_tree()
  function.

  The lists are returned in the form of a complicated structure of
  interlinked SEL_TREE/SEL_IMERGE/SEL_ARG objects.
  See quick_range_seq_next, find_used_partitions for examples of how to walk
  this structure.
  All direct "users" of this module are located within this file, too.

  The partition pruning module has a single entry point - the
  prune_partitions() function.

Range/index_merge/groupby-minmax optimizer module
  A module that accepts a table and a condition, and returns
   - a QUICK_*_SELECT object that can be used to retrieve rows that match
     the specified condition, or a "no records will match the condition"
     statement.

  The module entry points are

The code in this file (and elsewhere) performs operations on key value
tuples. Those tuples are stored in the following format:

The tuple is a sequence of key part values. The length of a key part value
depends only on its type (and not on what value is stored).

  KeyTuple: keypart1-data, keypart2-data, ...

The value of each keypart is stored in the following format:

  keypart_data: [isnull_byte] keypart-value-bytes

If a keypart may have a NULL value (key_part->field->real_maybe_null() can
be used to check this), then the first byte is a NULL indicator with the
following valid values:
  1 - keypart has a NULL value.
  0 - keypart has a non-NULL value.

keypart-value-bytes holds the value. Its format depends on the field type.
The length of keypart-value-bytes may or may not depend on the value being
stored. The default is that the length is static and equal to
KEY_PART_INFO::length.

Key parts with (key_part_flag & HA_BLOB_PART) have a length that depends
on the value stored:

  keypart-value-bytes: value_length value_bytes

The value_length part itself occupies HA_KEY_BLOB_LENGTH=2 bytes.
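
The layout described above can be sketched as a tiny encoder. This is an
illustration of the described format only, not the server's actual code;
append_keypart is a hypothetical helper, and the NULL case here simply
appends no value bytes.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative sketch of the KeyTuple layout: an optional 1-byte NULL
// indicator, then the value bytes; "blob-like" parts get a 2-byte
// little-endian length prefix (HA_KEY_BLOB_LENGTH).
static void append_keypart(std::vector<uint8_t> &tuple,
                           bool maybe_null, bool is_null,
                           bool blob_part,
                           const uint8_t *val, uint16_t val_len)
{
  if (maybe_null)
    tuple.push_back(is_null ? 1 : 0);   // 1 = NULL, 0 = non-NULL
  if (is_null)
    return;                             // no value bytes in this sketch
  if (blob_part)
  {
    // value_length occupies 2 bytes before the value itself
    tuple.push_back(static_cast<uint8_t>(val_len & 0xff));
    tuple.push_back(static_cast<uint8_t>(val_len >> 8));
  }
  tuple.insert(tuple.end(), val, val + val_len);
}
```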

See key_copy() and key_restore() for code to move data between an index
tuple and a table record.

CAUTION: the above description is only sergefp's understanding of the
         subject and may omit some details.
#include <drizzled/server_includes.h>
#include <drizzled/sql_select.h>
#include <drizzled/error.h>
#include <drizzled/cost_vect.h>
#include <drizzled/item/cmpfunc.h>
#include <drizzled/field/num.h>
#include <drizzled/check_stack_overrun.h>

#define test_rb_tree(A,B) {}
#define test_use_count(A) {}

#if defined(CMATH_NAMESPACE)
using namespace CMATH_NAMESPACE;
#endif

#define double2rows(x) ((ha_rows)(x))

static int sel_cmp(Field *f, unsigned char *a, unsigned char *b,
                   uint8_t a_flag, uint8_t b_flag);

static unsigned char is_null_string[2]= {1,0};

class RANGE_OPT_PARAM;

A construction block of the SEL_ARG-graph.

The following description only covers graphs of SEL_ARG objects with
sel_arg->type==KEY_RANGE:

One SEL_ARG object represents an "elementary interval" in the form

    min_value <=? table.keypartX <=? max_value

The interval is a non-empty interval of any kind: with[out] minimum/maximum
bound, [half]open/closed, single-point interval, etc.

1. SEL_ARG GRAPH STRUCTURE

SEL_ARG objects are linked together in a graph. The meaning of the graph
is better demonstrated by an example:

   |  part=1   $  part=2   $  part=3
   +-------+   $           $  +--------+
   ...         $           $  +--------+
   [example picture truncated in this copy]

The entire graph is partitioned into "interval lists".

An interval list is a sequence of ordered disjoint intervals over the same
key part. SEL_ARG objects are linked via "next" and "prev" pointers.
Additionally, all intervals in the list form an RB-tree, linked via
left/right/parent pointers. The RB-tree root SEL_ARG object will be further
called "root of the interval list".

In the example pic, there are 4 interval lists:
"kp<1 OR kp1=2 OR kp1=3", "kp2=5", "kp3=10 OR kp3=12", "kp3=11 OR kp3=13".
The vertical lines represent SEL_ARG::next/prev pointers.

In an interval list, each member X may have a SEL_ARG::next_key_part pointer
pointing to the root of another interval list Y. The pointed interval list
must cover a key part with a greater number (i.e. Y->part > X->part).

In the example pic, the next_key_part pointers are represented by
horizontal lines.

The graph represents a condition in a special form (we don't have a name
for it ATM): the SEL_ARG::next/prev relation is "OR", and next_key_part
is "AND".

For example, the picture represents the condition in the form:
 (kp1 < 1 AND kp2=5 AND (kp3=10 OR kp3=12)) OR
 (kp1=2 AND (kp3=11 OR kp3=14)) OR
 (kp1=3 AND (kp3=11 OR kp3=14))
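
The OR/AND semantics above can be modeled with a toy structure (a
hypothetical Node type, not the real SEL_ARG): siblings in an interval
list are OR-ed, next_key_part links are AND-ed, and enumerating all
root-to-leaf paths recovers the disjunction of conjunctions.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of the SEL_ARG-graph semantics described above.
struct Node
{
  std::string label;    // e.g. "kp1=2"
  Node *next;           // OR: next interval over the same keypart
  Node *next_key_part;  // AND: interval list over a later keypart
};

// Walk the graph, collecting one conjunction per root-to-leaf path.
static void collect(Node *list, const std::string &prefix,
                    std::vector<std::string> &out)
{
  for (Node *n= list; n; n= n->next)
  {
    std::string conj= prefix.empty() ? n->label : prefix + " AND " + n->label;
    if (n->next_key_part)
      collect(n->next_key_part, conj, out);
    else
      out.push_back(conj);
  }
}
```

Note that the test below also shares one kp3 list between two branches,
mirroring the interval sharing discussed later in this comment.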

Use get_mm_tree() to construct a SEL_ARG graph from a WHERE condition.
Then walk the SEL_ARG graph and get a list of disjoint ordered key
intervals (i.e. intervals in the form

 (constA1, .., const1_K) < (keypart1,.., keypartK) < (constB1, .., constB_K)

Those intervals can be used to access the index, e.g. to create a
QUICK_RANGE_SELECT object that will read records within these intervals.

4. SPACE COMPLEXITY NOTES

A SEL_ARG graph is a representation of an ordered disjoint sequence of
intervals over the ordered set of index tuple values.

For multi-part keys, one can construct a WHERE expression such that its
list of intervals will be of combinatorial size. Here is an example:

  (keypart1 IN (1,2, ..., n1)) AND
  (keypart2 IN (1,2, ..., n2)) AND
  (keypart3 IN (1,2, ..., n3))

For this WHERE clause the list of intervals will have n1*n2*n3 intervals
of the form

  (keypart1, keypart2, keypart3) = (k1, k2, k3),  where 1 <= k{i} <= n{i}
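
The two sizes at play can be sketched with trivial arithmetic (illustrative
helpers, not optimizer code): the expanded interval list has n1*n2*...*nk
single-point intervals, while a fully shared SEL_ARG-style graph needs only
about n1+n2+...+nk elementary-interval nodes.

```cpp
#include <cassert>
#include <cstdint>

// Number of single-point intervals produced by AND-ing IN-lists of the
// given sizes over consecutive keyparts: the product of the list sizes.
static uint64_t interval_count(const uint64_t *n, int k)
{
  uint64_t total= 1;
  for (int i= 0; i < k; i++)
    total*= n[i];
  return total;
}

// Node count for a maximally shared graph: one elementary interval per
// IN-list element, so just the sum of the list sizes.
static uint64_t shared_node_count(const uint64_t *n, int k)
{
  uint64_t total= 0;
  for (int i= 0; i < k; i++)
    total+= n[i];
  return total;
}
```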

The SEL_ARG graph structure aims to reduce the amount of required space by
"sharing" the elementary intervals when possible (the pic at the
beginning of this comment has examples of such sharing). The sharing may
prevent combinatorial blowup:

  There are WHERE clauses that have combinatorial-size interval lists but
  will be represented by a compact SEL_ARG graph:

    (keypartN IN (1,2, ..., n1)) AND
    ...
    (keypart2 IN (1,2, ..., n2)) AND
    (keypart1 IN (1,2, ..., n3))

but not in all cases:

  There are WHERE clauses that do have a compact SEL_ARG-graph
  representation but get_mm_tree() and its callees will construct a
  graph of combinatorial size:

    (keypart1 IN (1,2, ..., n1)) AND
    (keypart2 IN (1,2, ..., n2)) AND
    ...
    (keypartN IN (1,2, ..., n3))

  There are also WHERE clauses for which even the minimal SEL_ARG graph
  is of combinatorial size.
  By induction: let's take any interval on some keypart in the middle:

     kp15=c0

  Then let's AND it with this interval 'structure' from preceding and
  following keyparts:

    ((kp14=c1 AND kp16=c3) OR kp14=c2)            (*)

  We will obtain this SEL_ARG graph:

       kp14       $      kp15       $      kp16
    +---------+   $   +---------+   $   +---------+
    | kp14=c1 |--$-->| kp15=c0 |--$-->| kp16=c3 |
    +---------+   $   +---------+   $   +---------+
    +---------+   $   +---------+   $
    | kp14=c2 |--$-->| kp15=c0 |     $
    +---------+   $   +---------+   $

  Note that we had to duplicate "kp15=c0" and there was no way to avoid
  doing so.

  The induction step: AND the obtained expression with another "wrapping"
  expression like (*). When the process ends because of the limit on the
  max. number of keyparts, the graph is of combinatorial size while the
  WHERE clause length is only O(3*#max_keyparts).
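
The growth rates in the induction above can be sketched numerically (a
rough model of my own, not taken from the optimizer): each "wrapping" step
of the form ((kp_a=c1 AND kp_b=c3) OR kp_a=c2) doubles the number of
distinct paths through the middle interval, while adding only a constant
number of predicates to the WHERE text.

```cpp
#include <cassert>
#include <cstdint>

// Paths through the middle interval after a number of wrapping steps:
// they double with every wrap, hence 2^steps.
static uint64_t graph_paths(int wrap_steps)
{
  return uint64_t(1) << wrap_steps;
}

// WHERE clause size in predicates: the middle interval plus a constant
// three predicates contributed by each wrapping expression.
static uint64_t where_clause_terms(int wrap_steps)
{
  return 1 + 3 * uint64_t(wrap_steps);
}
```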

  SEL_ARG *left,*right;   /* R-B tree children */
  SEL_ARG *next,*prev;    /* Links for bi-directional interval list */
  SEL_ARG *parent;        /* R-B tree parent */
  SEL_ARG *next_key_part;
  enum leaf_color { BLACK,RED } color;
  enum Type { IMPOSSIBLE, MAYBE, MAYBE_KEY, KEY_RANGE } type;

  SEL_ARG(SEL_ARG &);
  SEL_ARG(Field *, const unsigned char *, const unsigned char *);
  SEL_ARG(Field *field, uint8_t part,
          unsigned char *min_value, unsigned char *max_value,
          uint8_t min_flag, uint8_t max_flag, uint8_t maybe_flag);
  SEL_ARG(enum Type type_arg)
    :min_flag(0),elements(1),use_count(1),left(0),right(0),next_key_part(0),
     color(BLACK), type(type_arg)
  {}
  inline void merge_flags(SEL_ARG *arg) { maybe_flag|=arg->maybe_flag; }
  inline void maybe_smaller() { maybe_flag=1; }
  /* Return true iff it's a single-point null interval */
  inline bool is_null_interval() { return maybe_null && max_value[0] == 1; }
  inline int cmp_min_to_min(SEL_ARG* arg)
  {
    return sel_cmp(field, min_value, arg->min_value, min_flag, arg->min_flag);
  }

    min_flag= arg->max_flag & NEAR_MAX ? 0 : NEAR_MIN;

  /* returns the number of keypart values (0 or 1) appended to the key buffer */
  int store_min(uint32_t length, unsigned char **min_key, uint32_t min_key_flag)
  {
    /* "(kp1 > c1) AND (kp2 OP c2) AND ..." -> (kp1 > c1) */
    if ((!(min_flag & NO_MIN_RANGE) &&
         !(min_key_flag & (NO_MIN_RANGE | NEAR_MIN))))

  /* returns the number of keypart values (0 or 1) appended to the key buffer */
  int store_max(uint32_t length, unsigned char **max_key, uint32_t max_key_flag)
  {
    if (!(max_flag & NO_MAX_RANGE) &&
        !(max_key_flag & (NO_MAX_RANGE | NEAR_MAX)))

  /* returns the number of keypart values appended to the key buffer */
  int store_min_key(KEY_PART *key, unsigned char **range_key, uint32_t *range_key_flag)
  {
    SEL_ARG *key_tree= first();
    uint32_t res= key_tree->store_min(key[key_tree->part].store_length,
                                      range_key, *range_key_flag);
    *range_key_flag|= key_tree->min_flag;
    if (key_tree->next_key_part &&
        key_tree->next_key_part->part == key_tree->part+1 &&
        !(*range_key_flag & (NO_MIN_RANGE | NEAR_MIN)) &&

  /* returns the number of keypart values appended to the key buffer */
  int store_max_key(KEY_PART *key, unsigned char **range_key, uint32_t *range_key_flag)
  {
    SEL_ARG *key_tree= last();
    uint32_t res= key_tree->store_max(key[key_tree->part].store_length,
                                      range_key, *range_key_flag);
    (*range_key_flag)|= key_tree->max_flag;
    if (key_tree->next_key_part &&

  bool is_singlepoint()
  {
    /*
      Check for NEAR_MIN ("strictly less") and NO_MIN_RANGE (-inf < field)
      flags, and the same for the right edge.
    */
    if (min_flag || max_flag)
      return false;
    unsigned char *min_val= min_value;
    unsigned char *max_val= max_value;

  /*
    Starting an effort to document this field:
    (for some i, keys[i]->type == SEL_ARG::IMPOSSIBLE) =>
      (type == SEL_TREE::IMPOSSIBLE)
  */
  enum Type { IMPOSSIBLE, ALWAYS, MAYBE, KEY, KEY_SMALLER } type;

  /* The members below are filled/used only after get_mm_tree is done */
  key_map ror_scans_map;  /* bitmask of ROR scan-able elements in keys */
  uint32_t n_ror_scans;   /* number of set bits in ror_scans_map */

  struct st_ror_scan_info **ror_scans;     /* list of ROR key scans */
  struct st_ror_scan_info **ror_scans_end; /* last ROR scan */

  /*
    Number of indexes used in range analysis (In SEL_TREE::keys only the
    first #keys elements are not empty)
  */
  uint32_t keys;

  /*
    If true, the index descriptions describe real indexes (and it is ok to
    call field->optimize_range(real_keynr[...], ...)).
    Otherwise the index descriptions describe fake indexes.
  */
  bool using_real_indexes;

  bool remove_jump_scans;

  /*
    used_key_no -> table_key_no translation table. Only makes sense if
    using_real_indexes==true
  */
  uint32_t real_keynr[MAX_KEY];
  /* Number of SEL_ARG objects allocated by SEL_ARG::clone_tree operations */
  uint32_t alloced_sel_args;
  bool force_default_mrr;

  KEY_PART *key[MAX_KEY];  /* First key parts of keys used in the query */
  int64_t baseflag;
  uint32_t max_key_part;
  /* Number of ranges in the last checked tree->key */
  uint32_t range_count;
  unsigned char min_key[MAX_KEY_LENGTH+MAX_FIELD_WIDTH],
                max_key[MAX_KEY_LENGTH+MAX_FIELD_WIDTH];
  bool quick;  // Don't calculate possible keys

  uint32_t fields_bitmap_size;
  MY_BITMAP needed_fields;     /* bitmask of fields needed by the query */
  MY_BITMAP tmp_covered_fields;

  key_map *needed_reg;         /* ptr to SQL_SELECT::needed_reg */

  uint32_t *imerge_cost_buff;      /* buffer for index_merge cost estimates */
  uint32_t imerge_cost_buff_size;  /* size of the buffer */

  /* true if last checked tree->key can be used for ROR-scan */
  bool is_ror_scan;
  /* Number of ranges in the last checked tree->key */
class TABLE_READ_PLAN;

static SEL_ARG *get_mm_leaf(RANGE_OPT_PARAM *param, COND *cond_func,
                            Field *field, KEY_PART *key_part,
                            Item_func::Functype type, Item *value);
static SEL_TREE *get_mm_tree(RANGE_OPT_PARAM *param, COND *cond);

static bool is_key_scan_ror(PARAM *param, uint32_t keynr, uint8_t nparts);
static ha_rows check_quick_select(PARAM *param, uint32_t idx, bool index_only,
                                  SEL_ARG *tree, bool update_tbl_stats,
                                  uint32_t *mrr_flags, uint32_t *bufsize,
                                  COST_VECT *cost);
//bool update_tbl_stats);
/*static ha_rows check_quick_keys(PARAM *param, uint32_t index, SEL_ARG *key_tree,
                                  unsigned char *min_key, uint32_t min_key_flag, int,
                                  unsigned char *max_key, uint32_t max_key_flag, int);
*/
QUICK_RANGE_SELECT *get_quick_select(PARAM *param, uint32_t index,
                                     SEL_ARG *key_tree, uint32_t mrr_flags,
                                     uint32_t mrr_buf_size, MEM_ROOT *alloc);
static TRP_RANGE *get_key_scans_params(PARAM *param, SEL_TREE *tree,
                                       bool index_read_must_be_used,
                                       bool update_tbl_stats,
                                       double read_time);

static void print_sel_tree(PARAM *param, SEL_TREE *tree, key_map *tree_map,
                           const char *msg);
static void print_ror_scans_arr(Table *table, const char *msg,
                                struct st_ror_scan_info **start,
                                struct st_ror_scan_info **end);

static SEL_ARG *sel_add(SEL_ARG *key1, SEL_ARG *key2);
static SEL_ARG *key_or(RANGE_OPT_PARAM *param, SEL_ARG *key1, SEL_ARG *key2);
static SEL_ARG *key_and(RANGE_OPT_PARAM *param, SEL_ARG *key1, SEL_ARG *key2,
                        uint32_t clone_flag);
static bool get_range(SEL_ARG **e1, SEL_ARG **e2, SEL_ARG *root1);
bool get_quick_keys(PARAM *param, QUICK_RANGE_SELECT *quick, KEY_PART *key,
                    SEL_ARG *key_tree, unsigned char *min_key, uint32_t min_key_flag,
                    unsigned char *max_key, uint32_t max_key_flag);
static bool eq_tree(SEL_ARG *a, SEL_ARG *b);

static SEL_ARG null_element(SEL_ARG::IMPOSSIBLE);
static bool null_part_in_key(KEY_PART *key_part, const unsigned char *key,
                             uint32_t length);
bool sel_trees_can_be_ored(SEL_TREE *tree1, SEL_TREE *tree2, RANGE_OPT_PARAM *param);

  if (trees_next == trees_end)
  {
    const int realloc_ratio= 2;  /* Double size for next round */
    uint32_t old_elements= (trees_end - trees);
    uint32_t old_size= sizeof(SEL_TREE**) * old_elements;
    uint32_t new_size= old_size * realloc_ratio;
    SEL_TREE **new_trees;
    if (!(new_trees= (SEL_TREE**)alloc_root(param->mem_root, new_size)))

    1 = Got some error (out of memory?)

SQL_SELECT *make_select(Table *head, table_map const_tables,
                        table_map read_tables, COND *conds,
                        bool allow_null_cond, int *error)
{
  SQL_SELECT *select;

  if (!conds && !allow_null_cond)
    return 0;
  if (!(select= new SQL_SELECT))
  {
    *error= 1;  // out of memory
    return 0;   /* purecov: inspected */
  }
  select->read_tables= read_tables;
  select->const_tables= const_tables;

bool SQL_SELECT::check_quick(Session *session, bool force_quick_range,
                             ha_rows limit)
{
  key_map tmp;
  tmp.set_all();
  return test_quick_select(session, tmp, 0, limit,
                           force_quick_range, false) < 0;
}

bool SQL_SELECT::skip_record()
{
  return cond ? cond->val_int() == 0 : 0;
}

QUICK_SELECT_I::QUICK_SELECT_I()
  :max_used_key_length(0),
   used_key_parts(0)
{}

QUICK_RANGE_SELECT::QUICK_RANGE_SELECT(Session *session, Table *table, uint32_t key_nr,
                                       bool no_alloc, MEM_ROOT *parent_alloc,
                                       bool *create_error)
  :free_file(0),cur_range(NULL),last_range(0),dont_free(0)
{
  key_part_info= head->key_info[index].key_part;
  my_init_dynamic_array(&ranges, sizeof(QUICK_RANGE*), 16, 16);

  /* 'session' is not accessible in QUICK_RANGE_SELECT::reset(). */
  mrr_buf_size= session->variables.read_rnd_buff_size;
  mrr_buf_desc= NULL;

  if (!no_alloc && !parent_alloc)
  {
    // Allocates everything through the internal memroot
    init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
    session->mem_root= &alloc;
  }
  else
    memset(&alloc, 0, sizeof(alloc));

    file->ha_external_lock(current_session, F_UNLCK);

  delete_dynamic(&ranges);  /* ranges are allocated in alloc */
  free_root(&alloc, MYF(0));
  free((char*) column_bitmap.bitmap);
  head->column_bitmaps_set(save_read_set, save_write_set);
  x_free(mrr_buf_desc);

QUICK_INDEX_MERGE_SELECT::QUICK_INDEX_MERGE_SELECT(Session *session_param,
                                                   Table *table)
  :pk_quick_select(NULL), session(session_param)
{
  index= MAX_KEY;
  head= table;
  memset(&read_record, 0, sizeof(read_record));
  init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
}

int QUICK_INDEX_MERGE_SELECT::init()

int QUICK_INDEX_MERGE_SELECT::reset()

QUICK_ROR_INTERSECT_SELECT::QUICK_ROR_INTERSECT_SELECT(Session *session_param,
                                                       Table *table,
                                                       bool retrieve_full_rows,
                                                       MEM_ROOT *parent_alloc)
  : cpk_quick(NULL), session(session_param), need_to_fetch_row(retrieve_full_rows),
    scans_inited(false)
{
  index= MAX_KEY;
  head= table;
  record= head->record[0];
  if (!parent_alloc)
    init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
  else
    memset(&alloc, 0, sizeof(MEM_ROOT));
  last_rowid= (unsigned char*) alloc_root(parent_alloc ? parent_alloc : &alloc,
                                          head->file->ref_length);
}

  /* already have own 'handler' object. */

  session= head->in_use;
  if (!(file= head->file->clone(session->mem_root)))
  {
    /*
      Manually set the error flag. Note: there seems to be quite a few
      places where a failure could cause the server to "hang" the client by
      sending no response to a query. ATM those are not real errors because
      the storage engine calls in question happen to never fail with the
      existing storage engines.
    */
    my_error(ER_OUT_OF_RESOURCES, MYF(0));  /* purecov: inspected */
    /* Caller will free the memory */
    goto failure;
  }

  head->column_bitmaps_set(&column_bitmap, &column_bitmap);

  if (file->ha_external_lock(session, F_RDLCK))
    goto failure;

  if (init() || reset())
  {
    file->ha_external_lock(session, F_UNLCK);

    if (quick->init_ror_merged_scan(true))

    quick->file->extra(HA_EXTRA_KEYREAD_PRESERVE_FIELDS);

  while ((quick= quick_it++))
  {
    if (quick->init_ror_merged_scan(false))

    quick->file->extra(HA_EXTRA_KEYREAD_PRESERVE_FIELDS);
    /* All merged scans share the same record buffer in intersection. */
    quick->record= head->record[0];
  }

QUICK_ROR_UNION_SELECT::QUICK_ROR_UNION_SELECT(Session *session_param,
                                               Table *table)
  : session(session_param), scans_inited(false)
{
  index= MAX_KEY;
  head= table;
  rowid_length= table->file->ref_length;
  record= head->record[0];
  init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
  session_param->mem_root= &alloc;
}

    val2  Second merged select

int QUICK_ROR_UNION_SELECT::queue_cmp(void *arg, unsigned char *val1, unsigned char *val2)
{
  QUICK_ROR_UNION_SELECT *self= (QUICK_ROR_UNION_SELECT*)arg;
  return self->head->file->cmp_ref(((QUICK_SELECT_I*)val1)->last_rowid,
                                   ((QUICK_SELECT_I*)val2)->last_rowid);
}

  use_count=0; elements=1;
}

SEL_ARG::SEL_ARG(Field *f, const unsigned char *min_value_arg,
                 const unsigned char *max_value_arg)
  :min_flag(0), max_flag(0), maybe_flag(0), maybe_null(f->real_maybe_null()),
   elements(1), use_count(1), field(f), min_value((unsigned char*) min_value_arg),
   max_value((unsigned char*) max_value_arg), next(0), prev(0),
   next_key_part(0), color(BLACK), type(KEY_RANGE)
{
  left=right= &null_element;
}

SEL_ARG::SEL_ARG(Field *field_, uint8_t part_,
                 unsigned char *min_value_, unsigned char *max_value_,
                 uint8_t min_flag_, uint8_t max_flag_, uint8_t maybe_flag_)
  :min_flag(min_flag_), max_flag(max_flag_), maybe_flag(maybe_flag_),
   part(part_), maybe_null(field_->real_maybe_null()), elements(1), use_count(1),

  Returns -2 or 2 if the ranges were 'joined' like < 2 and >= 2

static int sel_cmp(Field *field, unsigned char *a, unsigned char *b,
                   uint8_t a_flag, uint8_t b_flag)

    MAX_KEY if no such index was found.

uint32_t get_index_for_order(Table *table, order_st *order, ha_rows limit)
{
  uint32_t match_key= MAX_KEY, match_key_len= MAX_KEY_LENGTH + 1;

  for (ord= order; ord; ord= ord->next)
    if (!ord->asc)
      return MAX_KEY;

    if (!(table->keys_in_use_for_query.is_set(idx)))
      continue;
    KEY_PART_INFO *keyinfo= table->key_info[idx].key_part;
    uint32_t n_parts= table->key_info[idx].key_parts;

    /*
      The below check is sufficient considering we now have either BTREE
      indexes (records are returned in order for any index prefix) or HASH
      indexes (records are not returned in order for any index prefix).
    */
    if (!(table->file->index_flags(idx, 0, 1) & HA_READ_ORDER))

  /* Table read plans are allocated on MEM_ROOT and are never deleted */
  static void *operator new(size_t size, MEM_ROOT *mem_root)
  { return (void*) alloc_root(mem_root, (uint32_t) size); }
  static void operator delete(void *, size_t)
  {}
  static void operator delete(void *, MEM_ROOT *)
  { /* Never called */ }
  virtual ~TABLE_READ_PLAN() {}  /* Remove gcc warning */

  SEL_ARG *key;      /* set of intervals to be used in "range" method retrieval */
  uint32_t key_idx;  /* key number in PARAM::key */
  uint32_t mrr_flags;
  uint32_t mrr_buf_size;

  TRP_RANGE(SEL_ARG *key_arg, uint32_t idx_arg, uint32_t mrr_flags_arg)
   : key(key_arg), key_idx(idx_arg), mrr_flags(mrr_flags_arg)
  {}
  virtual ~TRP_RANGE() {}  /* Remove gcc warning */

  QUICK_SELECT_I *make_quick(PARAM *param, bool, MEM_ROOT *parent_alloc)
  {
    QUICK_RANGE_SELECT *quick;
    if ((quick= get_quick_select(param, key_idx, key, mrr_flags, mrr_buf_size,
                                 parent_alloc)))

  bool have_min, have_max;
  KEY_PART_INFO *min_max_arg_part;
  uint32_t group_prefix_len;
  uint32_t used_key_parts;
  uint32_t group_key_parts;
  KEY *index_info;
  uint32_t key_infix_len;
  unsigned char key_infix[MAX_KEY_LENGTH];
  SEL_TREE *range_tree;  /* Represents all range predicates in the query. */
  SEL_ARG *index_tree;   /* The SEL_ARG sub-tree corresponding to index_info. */
  uint32_t param_idx;    /* Index of used key in param->key. */
  /* Number of records selected by the ranges in index_tree. */
  ha_rows quick_prefix_records;

  TRP_GROUP_MIN_MAX(bool have_min_arg, bool have_max_arg,
                    KEY_PART_INFO *min_max_arg_part_arg,
                    uint32_t group_prefix_len_arg, uint32_t used_key_parts_arg,
                    uint32_t group_key_parts_arg, KEY *index_info_arg,
                    uint32_t index_arg, uint32_t key_infix_len_arg,
                    unsigned char *key_infix_arg,
                    SEL_TREE *tree_arg, SEL_ARG *index_tree_arg,
                    uint32_t param_idx_arg, ha_rows quick_prefix_records_arg)
  : have_min(have_min_arg), have_max(have_max_arg),
    min_max_arg_part(min_max_arg_part_arg),
    group_prefix_len(group_prefix_len_arg), used_key_parts(used_key_parts_arg),

static int fill_used_fields_bitmap(PARAM *param)
{
  Table *table= param->table;
  my_bitmap_map *tmp;

  param->tmp_covered_fields.bitmap= 0;
  param->fields_bitmap_size= table->s->column_bitmap_size;
  if (!(tmp= (my_bitmap_map*) alloc_root(param->mem_root,

  The quick_condition_rows value is obtained as follows:

  It is a minimum of E(#output rows) for all considered table access
  methods (range and index_merge accesses over various indexes).

  The obtained value is not a true E(#rows that satisfy table condition)
  but rather a pessimistic estimate. To obtain a true E(#...) one would
  need to combine estimates of the various access methods, taking into
  account correlations between the sets of rows they will return.

  For example, if the values of tbl.key1 and tbl.key2 are independent (a
  reasonable assumption if we have no information about their correlation),
  then the correct estimate will be:

    E(#rows("tbl.key1 < c1 AND tbl.key2 < c2")) =
     = E(#rows(tbl.key1 < c1)) / total_rows(tbl) * E(#rows(tbl.key2 < c2))

  which is smaller than

    MIN(E(#rows(tbl.key1 < c1)), E(#rows(tbl.key2 < c2)))

  which is currently produced.
2130
2157
* Change the value returned in quick_condition_rows from a pessimistic
2131
estimate to true E(#rows that satisfy table condition).
2132
(we can re-use some of E(#rows) calcuation code from index_merge/intersection
2158
estimate to true E(#rows that satisfy table condition).
2159
(we can re-use some of E(#rows) calcuation code from index_merge/intersection
2135
2162
* Check if this function really needs to modify keys_to_use, and change the
2136
2163
code to pass it by reference if it doesn't.
1 if found usable ranges and quick select has been successfully created.

int SQL_SELECT::test_quick_select(Session *session, key_map keys_to_use,
                                  table_map prev_tables,
                                  ha_rows limit, bool force_quick_range,
                                  bool ordered_output)
double scan_time;
needed_reg.clear_all();
quick_keys.clear_all();
if (keys_to_use.is_clear_all())
records= head->file->stats.records;
records++;                              /* purecov: inspected */
if (limit < records)
  read_time= (double) records + scan_time + 1; // Force to use index
else if (read_time <= 2.0 && !force_quick_range)
  return 0;                             /* No need for quick select */
keys_to_use.intersect(head->keys_in_use_for_query);
if (!keys_to_use.is_clear_all())
if (check_stack_overrun(session, 2*STACK_MIN_SIZE, NULL))
  return 0;                             // Fatal error flag is set

/* set up parameter that is passed to all functions */
param.session= session;
param.baseflag= head->file->ha_table_flags();
param.prev_tables=prev_tables | const_tables;
param.read_tables=read_tables;
param.table=head;
param.mem_root= &alloc;
param.old_root= session->mem_root;
param.needed_reg= &needed_reg;
param.imerge_cost_buff_size= 0;
param.using_real_indexes= true;
param.remove_jump_scans= true;
param.force_default_mrr= ordered_output;

session->no_errors=1;                   // Don't warn about NULL
init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
if (!(param.key_parts= (KEY_PART*) alloc_root(&alloc,
                                              sizeof(KEY_PART)*
                                              head->s->key_parts)) ||
    fill_used_fields_bitmap(&param))
session->no_errors=0;
free_root(&alloc,MYF(0));               // Return memory & allocator
return 0;                               // Can't use range
key_parts= param.key_parts;
session->mem_root= &alloc;

Make an array with description of all key parts of all table keys.

param.key[param.keys]=key_parts;
key_part_info= key_info->key_part;
for (uint32_t part=0 ; part < key_info->key_parts ;
     part++, key_parts++, key_part_info++)
key_parts->key= param.keys;

/* Calculate cost of full index read for the shortest covering index */
if (!head->covering_keys.is_clear_all())
int key_for_use= head->find_shortest_key(&head->covering_keys);
double key_read_time=
  param.table->file->index_only_read_time(key_for_use,
                                          rows2double(records)) +
  (double) records / TIME_FOR_COMPARE;
if (key_read_time < read_time)
group_trp= get_best_group_min_max(&param, tree);
param.table->quick_condition_rows= cmin(group_trp->records,
                                        head->file->stats.records);
if (group_trp->read_cost < best_read_time)
new_conj_trp= get_best_disjunct_quick(&param, imerge, best_read_time);
if (new_conj_trp)
  set_if_smaller(param.table->quick_condition_rows,
                 new_conj_trp->records);
if (!best_conj_trp || (new_conj_trp && new_conj_trp->read_cost <
                       best_conj_trp->read_cost))
bool pk_is_clustered= param->table->file->primary_key_is_clustered();
bool all_scans_ror_able= true;
bool all_scans_rors= true;
uint32_t unique_calc_buff_size;
TABLE_READ_PLAN **roru_read_plans;
TABLE_READ_PLAN **cur_roru_plan;
double roru_index_costs;
/* Calculate cost(rowid_to_row_scan) */
COST_VECT sweep_cost;
JOIN *join= param->session->lex->select_lex.join;
bool is_interrupted= test(join && join->tables == 1);
get_sweep_read_cost(param->table, non_cpk_scan_records, is_interrupted,

unique_calc_buff_size=
  Unique::get_cost_calc_buff_size((ulong)non_cpk_scan_records,
                                  param->table->file->ref_length,
                                  param->session->variables.sortbuff_size);
if (param->imerge_cost_buff_size < unique_calc_buff_size)
if (!(param->imerge_cost_buff= (uint*)alloc_root(param->mem_root,
                                                 unique_calc_buff_size)))
param->imerge_cost_buff_size= unique_calc_buff_size;

Unique::get_use_cost(param->imerge_cost_buff, (uint)non_cpk_scan_records,
                     param->table->file->ref_length,
                     param->session->variables.sortbuff_size);
if (imerge_cost < read_time)
if ((imerge_trp= new (param->mem_root)TRP_INDEX_MERGE))
imerge_trp->read_cost= imerge_cost;
imerge_trp->records= non_cpk_scan_records + cpk_scan_records;
imerge_trp->records= cmin(imerge_trp->records,
                          param->table->file->stats.records);
imerge_trp->range_scans= range_scans;
imerge_trp->range_scans_end= range_scans + n_child_scans;
typedef struct st_ror_scan_info
uint32_t idx;      /* # of used key in param->keys */
uint32_t keynr;    /* # of used key in table */
ha_rows records;   /* estimate of # records this scan will return */

/* Set of intervals over key fields that will be used for row retrieval. */
if (!(bitmap_buf= (my_bitmap_map*) alloc_root(param->mem_root,
                                              param->fields_bitmap_size)))
if (bitmap_init(&ror_scan->covered_fields, bitmap_buf,
                param->table->s->fields, false))
bitmap_clear_all(&ror_scan->covered_fields);
KEY_PART_INFO *key_part= param->table->key_info[keynr].key_part;
void ror_intersect_cpy(ROR_INTERSECT_INFO *dst, const ROR_INTERSECT_INFO *src)
dst->param= src->param;
memcpy(dst->covered_fields.bitmap, src->covered_fields.bitmap,
       no_bytes_in_map(&src->covered_fields));
dst->out_rows= src->out_rows;
dst->is_covering= src->is_covering;
Selectivity of given ROR scan.

static double ror_scan_selectivity(const ROR_INTERSECT_INFO *info,
                                   const ROR_SCAN_INFO *scan)
double selectivity_mult= 1.0;
KEY_PART_INFO *key_part= info->param->table->key_info[scan->keynr].key_part;
unsigned char key_val[MAX_KEY_LENGTH+MAX_FIELD_WIDTH]; /* key values tuple */
unsigned char *key_ptr= key_val;
SEL_ARG *sel_arg, *tuple_arg= NULL;
key_part_map keypart_map= 0;
bool cur_covered;
E(rows_to_retrieve) = #rows_in_table * ror_scan_selectivity(null, scan1) *
                      ror_scan_selectivity({scan1}, scan2) * ... *
                      ror_scan_selectivity({scan1,...}, scanN).

true  ROR scan added to ROR-intersection, cost updated.
false It doesn't make sense to add this ROR scan to this ROR-intersection.

if (selectivity_mult == 1.0)
/* Don't add this scan if it doesn't improve selectivity. */
info->out_rows *= selectivity_mult;
if (is_cpk_scan)

CPK scan is used to filter out rows. We apply filtering for
each record of every scan. Assuming 1/TIME_FOR_COMPARE_ROWID
per check this gives us:

info->index_scan_costs += rows2double(info->index_records) /
                          TIME_FOR_COMPARE_ROWID;
if (!info->is_covering)
COST_VECT sweep_cost;
JOIN *join= info->param->session->lex->select_lex.join;
bool is_interrupted= test(join && join->tables == 1);
get_sweep_read_cost(info->param->table, double2rows(info->out_rows),
                    is_interrupted, &sweep_cost);
info->total_cost += sweep_cost.total_cost();
double read_time,
bool *are_all_covering)
double min_cost= DBL_MAX;
if ((tree->n_ror_scans < 2) || !param->table->file->stats.records)

Step1: Collect ROR-able SEL_ARGs and create ROR_SCAN_INFO for each of
them. Also find and save clustered PK scan if there is one.

ROR_SCAN_INFO **cur_ror_scan;
ROR_SCAN_INFO *cpk_scan= NULL;
bool cpk_scan_used= false;
if (!(tree->ror_scans= (ROR_SCAN_INFO**)alloc_root(param->mem_root,

if (intersect_scans_best == intersect_scans)
print_ror_scans_arr(param->table,
                    "best ROR-intersection",
                    intersect_scans,
                    intersect_scans_best);
*are_all_covering= intersect->is_covering;
uint32_t best_num= intersect_scans_best - intersect_scans;
ror_intersect_cpy(intersect, intersect_best);

Ok, found the best ROR-intersection of non-CPK key scans.
Check if we should add a CPK scan. If the obtained ROR-intersection is
covering, it doesn't make sense to add CPK scan.

if (cpk_scan && !intersect->is_covering)
if (ror_intersect_add(intersect, cpk_scan, true) &&
    (intersect->total_cost < min_cost))
cpk_scan_used= true;
if (!(trp->first_scan=
      (ROR_SCAN_INFO**)alloc_root(param->mem_root,
                                  sizeof(ROR_SCAN_INFO*)*best_num)))
memcpy(trp->first_scan, intersect_scans, best_num*sizeof(ROR_SCAN_INFO*));
trp->last_scan= trp->first_scan + best_num;
trp->is_covering= intersect_best->is_covering;
ror_scan_mark= tree->ror_scans;

MY_BITMAP *covered_fields= &param->tmp_covered_fields;
if (!covered_fields->bitmap)
  covered_fields->bitmap= (my_bitmap_map*)alloc_root(param->mem_root,
                                                     param->fields_bitmap_size);
if (!covered_fields->bitmap ||
    bitmap_init(covered_fields, covered_fields->bitmap,
                param->table->s->fields, false))
bitmap_clear_all(covered_fields);

double total_cost= 0.0f;
total_cost += (*ror_scan_mark)->index_read_cost;
records += (*ror_scan_mark)->records;
if (total_cost > read_time)
/* F=F-covered by first(I) */
bitmap_union(covered_fields, &(*ror_scan_mark)->covered_fields);
all_covered= bitmap_is_subset(&param->needed_fields, covered_fields);
} while ((++ror_scan_mark < ror_scans_end) && !all_covered);

if (!all_covered || (ror_scan_mark - tree->ror_scans) == 1)

Ok, [tree->ror_scans .. ror_scan) holds covering index_intersection with

(TIME_FOR_COMPARE_ROWID * M_LN2);
if (total_cost > read_time)

TRP_ROR_INTERSECT *trp;
if (!(trp= new (param->mem_root) TRP_ROR_INTERSECT))
uint32_t best_num= (ror_scan_mark - tree->ror_scans);
if (!(trp->first_scan= (ROR_SCAN_INFO**)alloc_root(param->mem_root,
                                                   sizeof(ROR_SCAN_INFO*)*
memcpy(trp->first_scan, tree->ror_scans, best_num*sizeof(ROR_SCAN_INFO*));
trp->last_scan= trp->first_scan + best_num;
trp->is_covering= true;
trp->read_cost= total_cost;
trp->records= records;
trp->cpk_scan= NULL;
set_if_smaller(param->table->quick_condition_rows, records);
(except for clustered PK indexes)
update_tbl_stats true <=> update table->quick_* with information
                 about range scans we've evaluated.
read_time        Maximum cost, i.e. don't create read plans with
                 cost > read_time.

Find the best "range" table read plan for given SEL_TREE.
The side effects are
- tree->ror_scans is updated to indicate which scans are ROR scans.
- if update_tbl_stats=true then table->quick_* is updated with info
  about every possible range scan.

static TRP_RANGE *get_key_scans_params(PARAM *param, SEL_TREE *tree,
                                       bool index_read_must_be_used,
                                       bool update_tbl_stats,
                                       double read_time)
SEL_ARG **key,**end, **key_to_read= NULL;
ha_rows best_records= 0;
uint32_t best_mrr_flags= 0, best_buf_size= 0;
TRP_RANGE* read_plan= NULL;
Note that there may be trees that have type SEL_TREE::KEY but contain no

ha_rows found_records;
COST_VECT cost;
double found_read_time;
uint32_t mrr_flags, buf_size;
uint32_t keynr= param->real_keynr[idx];
if ((*key)->type == SEL_ARG::MAYBE_KEY ||
    (*key)->maybe_flag)
  param->needed_reg->set_bit(keynr);

bool read_index_only= index_read_must_be_used ||
                      param->table->covering_keys.is_set(keynr);

found_records= check_quick_select(param, idx, read_index_only, *key,
QUICK_SELECT_I *TRP_INDEX_MERGE::make_quick(PARAM *param, bool, MEM_ROOT *)
QUICK_INDEX_MERGE_SELECT *quick_imerge;
QUICK_RANGE_SELECT *quick;
/* index_merge always retrieves full rows, ignore retrieve_full_rows */
if (!(quick_imerge= new QUICK_INDEX_MERGE_SELECT(param->session, param->table)))
quick_imerge->records= records;
QUICK_SELECT_I *TRP_ROR_UNION::make_quick(PARAM *param, bool, MEM_ROOT *)
QUICK_ROR_UNION_SELECT *quick_roru;
TABLE_READ_PLAN **scan;

It is impossible to construct a ROR-union that will not retrieve full
rows, ignore retrieve_full_rows parameter.

if ((quick_roru= new QUICK_ROR_UNION_SELECT(param->session, param->table)))
for (scan= first_ror; scan != last_ror; scan++)
if (!(quick= (*scan)->make_quick(param, false, &quick_roru->alloc)) ||
    quick_roru->push_quick_back(quick))
quick_roru->records= records;
quick_roru->read_time= read_cost;
gt_value constant that the field should be greater than
cmp_type compare type for the field

# Pointer to the built tree

static SEL_TREE *get_ne_mm_tree(RANGE_OPT_PARAM *param, Item_func *cond_func,
                                Item *lt_value, Item *gt_value,
                                Item_result cmp_type)

value    constant in the predicate
cmp_type compare type for the field
inv      true <> NOT cond_func is considered
         (makes sense only when cond_func is BETWEEN or IN)

Pointer to the built tree

static SEL_TREE *get_func_mm_tree(RANGE_OPT_PARAM *param, Item_func *cond_func,
                                  Field *field, Item *value,
                                  Item_result cmp_type, bool inv)
We get here for conditions in form "t.key NOT IN (c1, c2, ...)",
where c{i} are constants. Our goal is to produce a SEL_TREE that
represents intervals:

($MIN<t.key<c1) OR (c1<t.key<c2) OR (c2<t.key<c3) OR ...    (*)

where $MIN is either "-inf" or NULL.

The most straightforward way to produce it is to convert NOT IN
into "(t.key != c1) AND (t.key != c2) AND ... " and let the range
analyzer build a SEL_TREE from that. The problem is that the

Another problem with big lists like (*) is that a big list is
unlikely to produce a good "range" access, while considering that
range access will require expensive CPU calculations (and for
MyISAM even index accesses). In short, big NOT IN lists are rarely
worth analyzing.

#define NOT_IN_IGNORE_THRESHOLD 1000
MEM_ROOT *tmp_root= param->mem_root;
param->session->mem_root= param->old_root;

Create one Item_type constant object. We'll need it as
get_mm_parts only accepts constant values wrapped in Item_Type

We create the Item on param->mem_root which points to
per-statement mem_root (while session->mem_root is currently pointing
to mem_root local to range optimizer).

Item *value_item= func->array->create_item();
param->session->mem_root= tmp_root;

if (func->array->count > NOT_IN_IGNORE_THRESHOLD || !value_item)

/* Get a SEL_TREE for "(-inf|NULL) < X < c_0" interval. */
func->array->value_to_item(i, value_item);
tree= get_mm_parts(param, cond_func, field, Item_func::LT_FUNC,

new_interval->min_flag= NEAR_MIN;

The following doesn't try to allocate memory so no need to
check for NULL.

tree= tree_or(param, tree, tree2);
if (tree && tree->type != SEL_TREE::IMPOSSIBLE)

Get the SEL_TREE for the last "c_last < X < +inf" interval
(value_item contains c_last already)

tree2= get_mm_parts(param, cond_func, field, Item_func::GT_FUNC,

for (arg= func->arguments()+2, end= arg+func->argument_count()-2;
     arg < end ; arg++)
  tree= tree_and(param, tree, get_ne_mm_tree(param, cond_func, field,
                                             *arg, *arg, cmp_type));

tree= get_mm_parts(param, cond_func, field, Item_func::EQ_FUNC,
                   func->arguments()[1], cmp_type);
for (arg= func->arguments()+2, end= arg+func->argument_count()-2;
     arg < end ; arg++)
  tree= tree_or(param, tree, get_mm_parts(param, cond_func, field,
                                          Item_func::EQ_FUNC,
                                          *arg, cmp_type));
Here the following predicates are processed:
<, <=, =, >=, >, LIKE, IS NULL, IS NOT NULL.
If the predicate is of the form (value op field) it is handled

c is a constant, the function builds a conjunction of all SEL_TREEs that can
be obtained by the substitution of f for all different fields equal to f.

If the WHERE condition contains a predicate (fi op c),
then not only the SEL_TREE for this predicate is built, but
the trees for the results of substitution of fi for
each fj belonging to the same multiple equality as fi
are built as well.
E.g. for WHERE t1.a=t2.a AND t2.a > 10
a SEL_TREE for t2.a > 10 will be built for quick select from t2
a SEL_TREE for t1.a > 10 will be built for quick select from t1.

A BETWEEN predicate of the form (fi [NOT] BETWEEN c1 AND c2) is treated

Yet a predicate of the form (c BETWEEN f1i AND f2i) is processed
differently. It is considered as a conjunction of two SARGable
predicates (f1i <= c) and (f2i >= c) and the function get_full_func_mm_tree
is called for each of them separately producing trees for
AND j (f1j <= c) and AND j (f2j >= c)
After this these two trees are united in one conjunctive tree.
It's easy to see that the same tree is obtained for
AND j,k (f1j <= c AND f2k >= c)
which is equivalent to
AND j,k (c BETWEEN f1j AND f2k).

The validity of the processing of the predicate (c NOT BETWEEN f1i AND f2i)
which is equivalent to (f1i > c OR f2i < c) is not so obvious. Here the
function get_full_func_mm_tree is called for (f1i > c) and (f2i < c)
producing trees for AND j (f1j > c) and AND j (f2j < c). Then these two
trees are united in one OR-tree. The expression
(AND j (f1j > c)) OR (AND j (f2j < c))
is equivalent to the expression
AND j,k (f1j > c OR f2k < c)
which is just a translation of
AND j,k (c NOT BETWEEN f1j AND f2k)

In the cases when one of the items f1, f2 is a constant c1 we do not create

As to IN predicates, only ones of the form (f IN (c1,...,cn)),
where f is a field and c1,...,cn are constants, are considered
SARGable. We never try to narrow the index scan using predicates of
the form (c IN (c1,...,f,...,cn)).

Pointer to the tree representing the built conjunction of SEL_TREEs

static SEL_TREE *get_full_func_mm_tree(RANGE_OPT_PARAM *param,
                                       Item_func *cond_func,
                                       Item_field *field_item, Item *value,

SEL_TREE *tree= 0;
while ((item=li++))
SEL_TREE *new_tree=get_mm_tree(param,item);
if (param->session->is_fatal_error ||
    param->alloced_sel_args > SEL_ARG::MAX_SEL_ARGS)
  return 0;                             // out of memory
tree=tree_and(param,tree,new_tree);
if (tree && tree->type == SEL_TREE::IMPOSSIBLE)

SEL_TREE *new_tree=get_mm_tree(param,item);
return 0;                               // out of memory
tree=tree_or(param,tree,new_tree);
if (!tree || tree->type == SEL_TREE::ALWAYS)

if (cond->const_item())

During the cond->val_int() evaluation we can come across a subselect
item which may allocate memory on the session->mem_root and assumes
all the memory allocated has the same life span as the subselect
item itself. So we have to restore the thread's mem_root here.

MEM_ROOT *tmp_root= param->mem_root;
param->session->mem_root= param->old_root;
tree= cond->val_int() ? new(tmp_root) SEL_TREE(SEL_TREE::ALWAYS) :
                        new(tmp_root) SEL_TREE(SEL_TREE::IMPOSSIBLE);
param->session->mem_root= tmp_root;
Concerning the code below see the NOTES section in
the comments for the function get_full_func_mm_tree()

for (uint32_t i= 1 ; i < cond_func->arg_count ; i++)
if (cond_func->arguments()[i]->real_item()->type() == Item::FIELD_ITEM)
field_item= (Item_field*) (cond_func->arguments()[i]->real_item());
SEL_TREE *tmp= get_full_func_mm_tree(param, cond_func,
                                     field_item, (Item*)(intptr_t)i, inv);
tree= !tree ? tmp : tree_or(param, tree, tmp);
tree= tree_and(param, tree, tmp);

Item_func_in *func=(Item_func_in*) cond_func;
if (func->key_item()->real_item()->type() != Item::FIELD_ITEM)
field_item= (Item_field*) (func->key_item()->real_item());
ftree= get_full_func_mm_tree(param, cond_func, field_item, NULL, inv);

case Item_func::MULT_EQUAL_FUNC:
Item_equal *item_equal= (Item_equal *) cond;
if (!(value= item_equal->get_const()))
Item_equal_iterator it(*item_equal);
ref_tables= value->used_tables();
while ((field_item= it++))
static SEL_TREE *
get_mm_parts(RANGE_OPT_PARAM *param, COND *cond_func, Field *field,
             Item_func::Functype type,
             Item *value, Item_result)
if (field->table != param->table)

KEY_PART *key_part = param->key_parts;
KEY_PART *end = param->key_parts_end;
SEL_TREE *tree=0;
value->used_tables() & ~(param->prev_tables | param->read_tables))
for (; key_part != end ; key_part++)
if (field->eq(key_part->field))
SEL_ARG *sel_arg=0;
if (!tree && !(tree=new SEL_TREE()))
if (!value || !(value->used_tables() & ~param->read_tables))
sel_arg=get_mm_leaf(param,cond_func,

// This key may be used later
if (!(sel_arg= new SEL_ARG(SEL_ARG::MAYBE_KEY)))
sel_arg->part=(unsigned char) key_part->part;
tree->keys[key_part->key]=sel_add(tree->keys[key_part->key],sel_arg);
tree->keys_map.set_bit(key_part->key);
get_mm_leaf(RANGE_OPT_PARAM *param, COND *conf_func, Field *field,
            KEY_PART *key_part, Item_func::Functype type,Item *value)
uint32_t maybe_null=(uint) field->real_maybe_null();
bool optimize_range;
SEL_ARG *tree= 0;
MEM_ROOT *alloc= param->mem_root;
ulong orig_sql_mode;

the argument can be any, e.g. a subselect. The subselect
items, in turn, assume that all the memory allocated during
the evaluation has the same life span as the item itself.
TODO: opt_range.cc should not reset session->mem_root at all.

param->session->mem_root= param->old_root;
if (!value)                             // IS NULL or IS NOT NULL
if (field->table->maybe_null)           // Can't use a key on this

bool like_error;
char buff1[MAX_FIELD_WIDTH];
unsigned char *min_str,*max_str;
String tmp(buff1,sizeof(buff1),value->collation.collation),*res;
size_t length, offset, min_length, max_length;
uint32_t field_length= field->pack_length()+maybe_null;
if (!optimize_range)

/* For comparison purposes allow invalid dates like 2000-01-32 */
orig_sql_mode= field->table->in_use->variables.sql_mode;
if (value->real_item()->type() == Item::STRING_ITEM &&
    (field->type() == DRIZZLE_TYPE_DATE ||
     field->type() == DRIZZLE_TYPE_DATETIME))
  field->table->in_use->variables.sql_mode|= MODE_INVALID_DATES;
err= value->save_in_field_no_warnings(field, 1);

but a non-zero time part was cut off.

In MySQL's SQL dialect, DATE and DATETIME are compared as datetime
values. Index over a DATE column uses DATE comparison. Changing
from one comparison to the other is possible:

datetime(date_col)< '2007-12-10 12:34:55' -> date_col<='2007-12-10'

field->table->in_use->variables.sql_mode= orig_sql_mode;
str= (unsigned char*) alloc_root(alloc, key_part->store_length+1);
if (maybe_null)
  *str= (unsigned char) field->is_real_null(); // Set to 1 if null
field->get_key_image(str+maybe_null, key_part->length,
                     key_part->image_type);
if (!(tree= new (alloc) SEL_ARG(field, str, str)))
if (!(tree= new (alloc) SEL_ARG(field, str, str)))
4693
4715
key_map result_keys;
4694
4716
result_keys.clear_all();
4696
4718
/* Join the trees key per key */
4697
4719
SEL_ARG **key1,**key2,**end;
4698
4720
for (key1= tree1->keys,key2= tree2->keys,end=key1+param->keys ;
4699
4721
key1 != end ; key1++,key2++)
4702
4724
if (*key1 || *key2)
4704
4726
if (*key1 && !(*key1)->simple_key())
4738
4760
using index_merge.
4741
bool sel_trees_can_be_ored(SEL_TREE *tree1, SEL_TREE *tree2,
4763
bool sel_trees_can_be_ored(SEL_TREE *tree1, SEL_TREE *tree2,
4742
4764
RANGE_OPT_PARAM* param)
4744
4766
key_map common_keys= tree1->keys_map;
4745
4767
common_keys.intersect(tree2->keys_map);
4747
4769
if (common_keys.is_clear_all())
4750
4772
/* trees have a common key, check if they refer to same key part */
4751
4773
SEL_ARG **key1,**key2;
4752
for (uint key_no=0; key_no < param->keys; key_no++)
4774
for (uint32_t key_no=0; key_no < param->keys; key_no++)
4754
4776
if (common_keys.is_set(key_no))
4792
4814
call tree_and(tree1, tree2) -- this joins SEL_ARGs into a usable SEL_ARG
4795
4817
There is an exception though: when we construct index_merge SEL_TREE,
4796
4818
any SEL_ARG* tree that cannot be used to construct quick range select can
4797
4819
be removed, because current range analysis code doesn't provide any way
4798
4820
that tree could be later combined with another tree.
4799
4821
Consider an example: we should not construct
4801
merges = SEL_IMERGE {
4802
SEL_TREE(t.key1part1 = 1),
4823
merges = SEL_IMERGE {
4824
SEL_TREE(t.key1part1 = 1),
4803
4825
SEL_TREE(t.key2part2 = 2) -- (*)
4807
- (*) cannot be used to construct quick range select,
4808
- There is no execution path that would cause (*) to be converted to
4829
- (*) cannot be used to construct quick range select,
4830
- There is no execution path that would cause (*) to be converted to
4809
4831
a tree that could be used.
4811
4833
The latter is easy to verify: first, notice that the only way to convert
4812
4834
(*) into a usable tree is to call tree_and(something, (*)).
4814
4836
Second look at what tree_and/tree_or function would do when passed a
4815
SEL_TREE that has the structure like st1 tree has, and conlcude that
4837
SEL_TREE that has the structure like st1 tree has, and conlcude that
4816
4838
tree_and(something, (*)) will not be called.
   /* one tree is index merge tree and another is range tree */
   if (tree1->merges.is_empty())
-    swap_variables(SEL_TREE*, tree1, tree2);
+    std::swap(tree1, tree2);
   if (param->remove_jump_scans && remove_nonrange_trees(param, tree2))
      return(new SEL_TREE(SEL_TREE::ALWAYS));
   /* add tree2 to tree1->merges, checking if it collapses to ALWAYS */
 /* And key trees where key1->part < key2 -> part */
 static SEL_ARG *
 and_all_keys(RANGE_OPT_PARAM *param, SEL_ARG *key1, SEL_ARG *key2,
-             uint clone_flag)
+             uint32_t clone_flag)
   ulong use_count=key1->use_count;
   if (key1->type == SEL_ARG::MAYBE_KEY)
   {                                             // Both are maybe key
     key1->next_key_part=key_and(param, key1->next_key_part,
                                 key2->next_key_part, clone_flag);
     if (key1->next_key_part &&
         key1->next_key_part->type == SEL_ARG::IMPOSSIBLE)
   if (root == &null_element)
-    return(0);                          // Maybe root later
+    return 0;                           // Maybe root later
   if (remove_color == BLACK)
     root=rb_delete_fixup(root,nod,fix_par);
   test_rb_tree(root,root->parent);
+#endif /* EXTRA_DEBUG */
   root->use_count=this->use_count;      // Fix root counters
   root->elements=this->elements-1;
     return 0;                           // Found end of tree
   if (element->parent != parent)
-    sql_print_error("Wrong tree: Parent doesn't point at parent");
+    errmsg_printf(ERRMSG_LVL_ERROR, "Wrong tree: Parent doesn't point at parent");
   if (element->color == SEL_ARG::RED &&
       (element->left->color == SEL_ARG::RED ||
        element->right->color == SEL_ARG::RED))
-    sql_print_error("Wrong tree: Found two red in a row");
+    errmsg_printf(ERRMSG_LVL_ERROR, "Wrong tree: Found two red in a row");
   if (element->left == element->right && element->left != &null_element)
   {                                     // Dummy test
-    sql_print_error("Wrong tree: Found right == left");
+    errmsg_printf(ERRMSG_LVL_ERROR, "Wrong tree: Found right == left");
   count_l=test_rb_tree(element->left,element);
   This function counts how many times the node "key" is referred (via
   SEL_ARG::next_key_part) by
   - intervals of the RB-tree pointed to by "root",
   - intervals of RB-trees that are pointed to by SEL_ARG::next_key_part from
     intervals of the RB-tree pointed to by "root",
   Here is an example (horizontal links represent next_key_part pointers,
   vertical links - next/prev pointers):
       |root|-----------------+
 void SEL_ARG::test_use_count(SEL_ARG *root)
   if (this == root && use_count != 1)
-    sql_print_information("Use_count: Wrong count %lu for root",use_count);
+    errmsg_printf(ERRMSG_LVL_INFO, "Use_count: Wrong count %lu for root",use_count);
   if (this->type != SEL_ARG::KEY_RANGE)
     ulong count=count_key_part_usage(root,pos->next_key_part);
     if (count > pos->next_key_part->use_count)
-      sql_print_information("Use_count: Wrong count for key at 0x%lx, %lu "
+      errmsg_printf(ERRMSG_LVL_INFO, "Use_count: Wrong count for key at 0x%lx, %lu "
                             "should be %lu", (long unsigned int)pos,
                             pos->next_key_part->use_count, count);
 ****************************************************************************/
 /* MRR range sequence, SEL_ARG* implementation: stack entry */
 typedef struct st_range_seq_entry
     Pointers in min and max keys. They point to right-after-end of key
     images. The 0-th entry has these pointing to key tuple start.
-  uchar *min_key, *max_key;
+  unsigned char *min_key, *max_key;
     Flags, for {keypart0, keypart1, ... this_keypart} subtuple.
     min_key_flag may have NULL_RANGE set.
-  uint min_key_flag, max_key_flag;
+  uint32_t min_key_flag, max_key_flag;
   /* Number of key parts */
-  uint min_key_parts, max_key_parts;
+  uint32_t min_key_parts, max_key_parts;
   SEL_ARG *key_tree;
 } RANGE_SEQ_ENTRY;
 typedef struct st_sel_arg_range_seq
-  uint keyno;      /* index of used tree in SEL_TREE structure */
-  uint real_keyno; /* Number of the index in tables */
+  uint32_t keyno;      /* index of used tree in SEL_TREE structure */
+  uint32_t real_keyno; /* Number of the index in tables */
   SEL_ARG *start; /* Root node of the traversed SEL_ARG* graph */
   RANGE_SEQ_ENTRY stack[MAX_REF_PARTS];
   int i; /* Index of last used element in the above array */
   bool at_start; /* true <=> The traversal has just started */
 } SEL_ARG_RANGE_SEQ;
     init_params  SEL_ARG tree traversal context
     n_ranges     [ignored] The number of ranges obtained
     flags        [ignored] HA_MRR_SINGLE_POINT, HA_MRR_FIXED_KEY
     Value of init_param
-range_seq_t sel_arg_range_seq_init(void *init_param,
-                                   uint n_ranges __attribute__((unused)),
-                                   uint flags __attribute__((unused)))
+range_seq_t sel_arg_range_seq_init(void *init_param, uint32_t, uint32_t)
   SEL_ARG_RANGE_SEQ *seq= (SEL_ARG_RANGE_SEQ*)init_param;
   seq->at_start= true;
     Walk right-up while we can
 walk_right_n_up:
   while (key_tree->next_key_part && key_tree->next_key_part != &null_element &&
          key_tree->next_key_part->part == key_tree->part + 1 &&
          key_tree->next_key_part->type == SEL_ARG::KEY_RANGE)
       RANGE_SEQ_ENTRY *cur= &seq->stack[seq->i];
-      uint min_key_length= cur->min_key - seq->param->min_key;
-      uint max_key_length= cur->max_key - seq->param->max_key;
-      uint len= cur->min_key - cur[-1].min_key;
+      uint32_t min_key_length= cur->min_key - seq->param->min_key;
+      uint32_t max_key_length= cur->max_key - seq->param->max_key;
+      uint32_t len= cur->min_key - cur[-1].min_key;
       if (!(min_key_length == max_key_length &&
             !memcmp(cur[-1].min_key, cur[-1].max_key, len) &&
             !key_tree->min_flag && !key_tree->max_flag))
         seq->param->is_ror_scan= false;
         if (!key_tree->min_flag)
           cur->min_key_parts +=
             key_tree->next_key_part->store_min_key(seq->param->key[seq->keyno],
                                                    &cur->min_key_flag);
         if (!key_tree->max_flag)
           cur->max_key_parts +=
             key_tree->next_key_part->store_max_key(seq->param->key[seq->keyno],
                                                    &cur->max_key_flag);
     Ok, current atomic interval is in form "t.field=const" and there is
     next_key_part interval. Step right, and walk up from there.
   /* Ok got a tuple */
   RANGE_SEQ_ENTRY *cur= &seq->stack[seq->i];
   range->ptr= (char*)(int)(key_tree->part);
   range->range_flag= cur->min_key_flag | cur->max_key_flag;
   range->start_key.key=    seq->param->min_key;
   range->start_key.length= cur->min_key - seq->param->min_key;
   range->start_key.keypart_map= make_prev_keypart_map(cur->min_key_parts);
   range->start_key.flag= (cur->min_key_flag & NEAR_MIN ? HA_READ_AFTER_KEY :
                           HA_READ_KEY_EXACT);
   range->end_key.key=    seq->param->max_key;
   range->end_key.length= cur->max_key - seq->param->max_key;
   range->end_key.flag= (cur->max_key_flag & NEAR_MAX ? HA_READ_BEFORE_KEY :
                         HA_READ_AFTER_KEY);
   range->end_key.keypart_map= make_prev_keypart_map(cur->max_key_parts);
   seq->param->range_count++;
-  seq->param->max_key_part=max(seq->param->max_key_part,(uint)key_tree->part);
+  seq->param->max_key_part=cmax(seq->param->max_key_part,(uint)key_tree->part);
   Calculate cost and E(#rows) for a given index and intervals tree
     check_quick_select()
-ha_rows check_quick_select(PARAM *param, uint idx, bool index_only,
-                           SEL_ARG *tree, bool update_tbl_stats,
-                           uint *mrr_flags, uint *bufsize, COST_VECT *cost)
+ha_rows check_quick_select(PARAM *param, uint32_t idx, bool index_only,
+                           SEL_ARG *tree, bool update_tbl_stats,
+                           uint32_t *mrr_flags, uint32_t *bufsize, COST_VECT *cost)
   SEL_ARG_RANGE_SEQ seq;
   RANGE_SEQ_IF seq_if = {sel_arg_range_seq_init, sel_arg_range_seq_next};
   handler *file= param->table->file;
-  uint keynr= param->real_keynr[idx];
+  uint32_t keynr= param->real_keynr[idx];
   /* Handle cases when we don't have a valid non-empty list of ranges */
     return(HA_POS_ERROR);
   param->is_ror_scan= true;
   if (file->index_flags(keynr, 0, true) & HA_KEY_SCAN_NOT_ROR)
     param->is_ror_scan= false;
   *mrr_flags= param->force_default_mrr? HA_MRR_USE_DEFAULT_IMPL: 0;
   *mrr_flags|= HA_MRR_NO_ASSOCIATION;
   bool pk_is_clustered= file->primary_key_is_clustered();
       (file->index_flags(keynr, param->max_key_part, 1) & HA_KEYREAD_ONLY) &&
       !(pk_is_clustered && keynr == param->table->s->primary_key))
      *mrr_flags |= HA_MRR_INDEX_ONLY;
-  if (current_thd->lex->sql_command != SQLCOM_SELECT)
+  if (current_session->lex->sql_command != SQLCOM_SELECT)
     *mrr_flags |= HA_MRR_USE_DEFAULT_IMPL;
-  *bufsize= param->thd->variables.read_rnd_buff_size;
+  *bufsize= param->session->variables.read_rnd_buff_size;
   rows= file->multi_range_read_info_const(keynr, &seq_if, (void*)&seq, 0,
                                           bufsize, mrr_flags, cost);
   if (rows != HA_POS_ERROR)
     param->table->quick_key_parts[keynr]=param->max_key_part+1;
     param->table->quick_n_ranges[keynr]= param->range_count;
     param->table->quick_condition_rows=
-      min(param->table->quick_condition_rows, rows);
+      cmin(param->table->quick_condition_rows, rows);
   /* Figure out if the key scan is ROR (returns rows in ROWID order) or not */
   enum ha_key_alg key_alg= param->table->key_info[seq.real_keyno].algorithm;
   if ((key_alg != HA_KEY_ALG_BTREE) && (key_alg!= HA_KEY_ALG_UNDEF))
       All scans are non-ROR scans for those index types.
       TODO: Don't have this logic here, make table engines return
       appropriate flags instead.
     param->is_ror_scan= false;
     where the index is defined on (key1_1, ..., key1_N [,a_1, ..., a_n])
     and the table has a clustered Primary Key defined as
       PRIMARY KEY(a_1, ..., a_n, b1, ..., b_k)
     i.e. the first key parts of it are identical to uncovered parts of the
     key being scanned. This function assumes that the index flags do not
     include HA_KEY_SCAN_NOT_ROR flag (that is checked elsewhere).
     false  Otherwise
-static bool is_key_scan_ror(PARAM *param, uint keynr, uint8_t nparts)
+static bool is_key_scan_ror(PARAM *param, uint32_t keynr, uint8_t nparts)
   KEY *table_key= param->table->key_info + keynr;
   KEY_PART_INFO *key_part= table_key->key_part + nparts;
   KEY_PART_INFO *key_part_end= (table_key->key_part +
                                 table_key->key_parts);
   for (KEY_PART_INFO *kp= table_key->key_part; kp < key_part; kp++)
     uint16_t fieldnr= param->table->key_info[keynr].
6365
QUICK_RANGE_SELECT *
6341
get_quick_select(PARAM *param,uint idx,SEL_ARG *key_tree, uint mrr_flags,
6342
uint mrr_buf_size, MEM_ROOT *parent_alloc)
6366
get_quick_select(PARAM *param,uint32_t idx,SEL_ARG *key_tree, uint32_t mrr_flags,
6367
uint32_t mrr_buf_size, MEM_ROOT *parent_alloc)
6344
6369
QUICK_RANGE_SELECT *quick;
6345
6370
bool create_err= false;
6347
quick=new QUICK_RANGE_SELECT(param->thd, param->table,
6372
quick=new QUICK_RANGE_SELECT(param->session, param->table,
6348
6373
param->real_keynr[idx],
6349
6374
test(parent_alloc), NULL, &create_err);
6379
6404
get_quick_keys(PARAM *param,QUICK_RANGE_SELECT *quick,KEY_PART *key,
6380
SEL_ARG *key_tree, uchar *min_key,uint min_key_flag,
6381
uchar *max_key, uint max_key_flag)
6405
SEL_ARG *key_tree, unsigned char *min_key,uint32_t min_key_flag,
6406
unsigned char *max_key, uint32_t max_key_flag)
6383
6408
QUICK_RANGE *range;
6385
6410
int min_part= key_tree->part-1, // # of keypart values in min_key buffer
6386
6411
max_part= key_tree->part-1; // # of keypart values in max_key buffer
6391
6416
min_key,min_key_flag, max_key, max_key_flag))
6394
uchar *tmp_min_key=min_key,*tmp_max_key=max_key;
6419
unsigned char *tmp_min_key=min_key,*tmp_max_key=max_key;
6395
6420
min_part+= key_tree->store_min(key[key_tree->part].store_length,
6396
6421
&tmp_min_key,min_key_flag);
6397
6422
max_part+= key_tree->store_max(key[key_tree->part].store_length,
6412
6437
goto end; // Ugly, but efficient
6415
uint tmp_min_flag=key_tree->min_flag,tmp_max_flag=key_tree->max_flag;
6440
uint32_t tmp_min_flag=key_tree->min_flag,tmp_max_flag=key_tree->max_flag;
6416
6441
if (!tmp_min_flag)
6417
6442
min_part+= key_tree->next_key_part->store_min_key(key, &tmp_min_key,
6418
6443
&tmp_min_flag);
6476
6501
set_if_bigger(quick->max_used_key_length, range->min_length);
6477
6502
set_if_bigger(quick->max_used_key_length, range->max_length);
6478
6503
set_if_bigger(quick->used_key_parts, (uint) key_tree->part+1);
6479
if (insert_dynamic(&quick->ranges, (uchar*) &range))
6504
if (insert_dynamic(&quick->ranges, (unsigned char*) &range))
     false  Otherwise
-static bool null_part_in_key(KEY_PART *key_part, const uchar *key, uint length)
+static bool null_part_in_key(KEY_PART *key_part, const unsigned char *key, uint32_t length)
-  for (const uchar *end=key+length ;
+  for (const unsigned char *end=key+length ;
        key+= key_part++->store_length)
-QUICK_RANGE_SELECT *get_quick_select_for_ref(THD *thd, TABLE *table,
+QUICK_RANGE_SELECT *get_quick_select_for_ref(Session *session, Table *table,
                                              TABLE_REF *ref, ha_rows records)
   MEM_ROOT *old_root, *alloc;
   KEY *key_info = &table->key_info[ref->key];
   KEY_PART *key_part;
   QUICK_RANGE *range;
   bool create_err= false;
   COST_VECT cost;
-  old_root= thd->mem_root;
-  /* The following call may change thd->mem_root */
-  quick= new QUICK_RANGE_SELECT(thd, table, ref->key, 0, 0, &create_err);
+  old_root= session->mem_root;
+  /* The following call may change session->mem_root */
+  quick= new QUICK_RANGE_SELECT(session, table, ref->key, 0, 0, &create_err);
   /* save mem_root set by QUICK_RANGE_SELECT constructor */
-  alloc= thd->mem_root;
+  alloc= session->mem_root;
-    return back default mem_root (thd->mem_root) changed by
+    return back default mem_root (session->mem_root) changed by
     QUICK_RANGE_SELECT constructor
-  thd->mem_root= old_root;
+  session->mem_root= old_root;
   if (!quick || create_err)
     return 0;                   /* no ranges found */
                              make_prev_keypart_map(ref->key_parts), EQ_RANGE)))
     *ref->null_ref_key= 0;              // Clear null byte
-    if (insert_dynamic(&quick->ranges,(uchar*)&null_range))
+    if (insert_dynamic(&quick->ranges,(unsigned char*)&null_range))
   /* Call multi_range_read_info() to get the MRR flags and buffer size */
   quick->mrr_flags= HA_MRR_NO_ASSOCIATION |
                     (table->key_read ? HA_MRR_INDEX_ONLY : 0);
-  if (thd->lex->sql_command != SQLCOM_SELECT)
+  if (session->lex->sql_command != SQLCOM_SELECT)
     quick->mrr_flags |= HA_MRR_USE_DEFAULT_IMPL;
-  quick->mrr_buf_size= thd->variables.read_rnd_buff_size;
+  quick->mrr_buf_size= session->variables.read_rnd_buff_size;
   if (table->file->multi_range_read_info(quick->index, 1, (uint)records,
                                          &quick->mrr_buf_size,
                                          &quick->mrr_flags, &cost))
     Perform key scans for all used indexes (except CPK), get rowids and merge
     them into an ordered non-recurrent sequence of rowids.
     The merge/duplicate removal is performed using Unique class. We put all
     rowids into Unique, get the sorted sequence and destroy the Unique.
     If table has a clustered primary key that covers all rows (true for bdb
     and innodb currently) and one of the index_merge scans is a scan on PK,
     then rows that will be retrieved by PK scan are not put into Unique and
     primary key scan is not performed here, it is performed later separately.
   cur_quick_it.rewind();
   cur_quick= cur_quick_it++;
   assert(cur_quick != 0);
     We reuse the same instance of handler so we need to call both init and
   if (cur_quick->init() || cur_quick->reset())
   unique= new Unique(refpos_order_cmp, (void *)file,
                      file->ref_length,
-                     thd->variables.sortbuff_size);
+                     session->variables.sortbuff_size);
   while ((result= cur_quick->get_next()) == HA_ERR_END_OF_FILE)
   /* index_merge currently doesn't support "using index" at all */
   file->extra(HA_EXTRA_NO_KEYREAD);
   /* start table scan */
-  init_read_record(&read_record, thd, head, (SQL_SELECT*) 0, 1, 1);
+  init_read_record(&read_record, session, head, (SQL_SELECT*) 0, 1, 1);
   if (!mrr_buf_desc)
     empty_buf.buffer= empty_buf.buffer_end= empty_buf.end_of_used_area= NULL;
   mrr_flags |= HA_MRR_SORTED;
   RANGE_SEQ_IF seq_funcs= {quick_range_seq_init, quick_range_seq_next};
   error= file->multi_range_read_init(&seq_funcs, (void*)this, ranges.elements,
                                      mrr_flags, mrr_buf_desc? mrr_buf_desc:
   Range sequence interface implementation for array<QUICK_RANGE>: initialize
     quick_range_seq_init()
       init_param  Caller-opaque parameter: QUICK_RANGE_SELECT* pointer
       n_ranges    Number of ranges in the sequence (ignored)
       flags       MRR flags (currently not used)
     Opaque value to be passed to quick_range_seq_next
-range_seq_t quick_range_seq_init(void *init_param,
-                                 uint n_ranges __attribute__((unused)),
-                                 uint flags __attribute__((unused)))
+range_seq_t quick_range_seq_init(void *init_param, uint32_t, uint32_t)
   QUICK_RANGE_SELECT *quick= (QUICK_RANGE_SELECT*)init_param;
   quick->qr_traversal_ctx.first= (QUICK_RANGE**)quick->ranges.buffer;
   function returns a reference to the "range_flag" associated with the
   range number idx.
   This function should be removed when we get a proper MRR/NDB
   implementation.
     Reference to range_flag associated with range number #idx
-uint16_t &mrr_persistent_flag_storage(range_seq_t seq, uint idx)
+uint16_t &mrr_persistent_flag_storage(range_seq_t seq, uint32_t idx)
   QUICK_RANGE_SEQ_CTX *ctx= (QUICK_RANGE_SEQ_CTX*)seq;
   return ctx->first[idx]->flag;
     other  if some error occurred
-int QUICK_RANGE_SELECT::get_next_prefix(uint prefix_length,
+int QUICK_RANGE_SELECT::get_next_prefix(uint32_t prefix_length,
                                         key_part_map keypart_map,
-                                        uchar *cur_prefix)
+                                        unsigned char *cur_prefix)
     result= file->index_read_map(record, cur_prefix, keypart_map,
                                  HA_READ_AFTER_KEY);
     if (result || (file->compare_key(file->end_range) <= 0))
-    uint count= ranges.elements - (cur_range - (QUICK_RANGE**) ranges.buffer);
+    uint32_t count= ranges.elements - (cur_range - (QUICK_RANGE**) ranges.buffer);
     if (count == 0)
       /* Ranges have already been used up before. None is left for read. */
-      return(HA_ERR_END_OF_FILE);
+      return HA_ERR_END_OF_FILE;
     last_range= *(cur_range++);
-    start_key.key=    (const uchar*) last_range->min_key;
-    start_key.length= min(last_range->min_length, (uint16_t)prefix_length);
+    start_key.key=    (const unsigned char*) last_range->min_key;
+    start_key.length= cmin(last_range->min_length, (uint16_t)prefix_length);
     start_key.keypart_map= last_range->min_keypart_map & keypart_map;
     start_key.flag=   ((last_range->flag & NEAR_MIN) ? HA_READ_AFTER_KEY :
                        (last_range->flag & EQ_RANGE) ?
                        HA_READ_KEY_EXACT : HA_READ_KEY_OR_NEXT);
-    end_key.key=      (const uchar*) last_range->max_key;
-    end_key.length=   min(last_range->max_length, (uint16_t)prefix_length);
+    end_key.key=      (const unsigned char*) last_range->max_key;
+    end_key.length=   cmin(last_range->max_length, (uint16_t)prefix_length);
     end_key.keypart_map= last_range->max_keypart_map & keypart_map;
       We use READ_AFTER_KEY here because if we are reading on a key
   for now, this seems to work right at least.
-QUICK_SELECT_DESC::QUICK_SELECT_DESC(QUICK_RANGE_SELECT *q,
-                                     uint used_key_parts_arg __attribute__((unused)),
-                                     bool *create_err __attribute__((unused)))
+QUICK_SELECT_DESC::QUICK_SELECT_DESC(QUICK_RANGE_SELECT *q, uint32_t, bool *)
  :QUICK_RANGE_SELECT(*q), rev_it(rev_ranges)
   QUICK_RANGE *r;
     if (cmp_prev(*rev_it.ref()) == 0)
     else if (result != HA_ERR_END_OF_FILE)
     if (!(last_range= rev_it++))
-      return(HA_ERR_END_OF_FILE);               // All ranges used
+      return HA_ERR_END_OF_FILE;                // All ranges used
     if (last_range->flag & NO_MAX_RANGE)        // Read last record
     return 0;                                   /* key can't be too large */
   KEY_PART *key_part=key_parts;
-  uint store_length;
+  uint32_t store_length;
-  for (uchar *key=range_arg->max_key, *end=key+range_arg->max_length;
+  for (unsigned char *key=range_arg->max_key, *end=key+range_arg->max_length;
        key+= store_length, key_part++)
  * Implementation of QUICK_GROUP_MIN_MAX_SELECT
  *******************************************************************************/
-static inline uint get_field_keypart(KEY *index, Field *field);
-static inline SEL_ARG * get_index_range_tree(uint index, SEL_TREE* range_tree,
-                                             PARAM *param, uint *param_idx);
+static inline uint32_t get_field_keypart(KEY *index, Field *field);
+static inline SEL_ARG * get_index_range_tree(uint32_t index, SEL_TREE* range_tree,
+                                             PARAM *param, uint32_t *param_idx);
 static bool get_constant_key_infix(KEY *index_info, SEL_ARG *index_range_tree,
                                    KEY_PART_INFO *first_non_group_part,
                                    KEY_PART_INFO *min_max_arg_part,
-                                   KEY_PART_INFO *last_part, THD *thd,
-                                   uchar *key_infix, uint *key_infix_len,
+                                   KEY_PART_INFO *last_part, Session *session,
+                                   unsigned char *key_infix, uint32_t *key_infix_len,
                                    KEY_PART_INFO **first_non_infix_part);
 check_group_min_max_predicates(COND *cond, Item_field *min_max_arg_item,
                                Field::imagetype image_type);
-cost_group_min_max(TABLE* table, KEY *index_info, uint used_key_parts,
-                   uint group_key_parts, SEL_TREE *range_tree,
+cost_group_min_max(Table* table, KEY *index_info, uint32_t used_key_parts,
+                   uint32_t group_key_parts, SEL_TREE *range_tree,
                    SEL_ARG *index_tree, ha_rows quick_prefix_records,
                    bool have_min, bool have_max,
                    double *read_cost, ha_rows *records);
    - NGA = QA - (GA union C) = {NG_1, ..., NG_m} - the ones not in
      GROUP BY and not referenced by MIN/MAX functions.
      with the following properties specified below.
   B3. If Q has a GROUP BY WITH ROLLUP clause the access method is not
   SA1. There is at most one attribute in SA referenced by any number of
    other field as in: "select min(a) from t1 group by a" ?
  - We assume that the general correctness of the GROUP-BY query was checked
    before this point. Is this correct, or do we have to check it completely?
  - Lift the limitation in condition (B3), that is, make this access method
    applicable to ROLLUP queries.
7851
static TRP_GROUP_MIN_MAX *
7832
7852
get_best_group_min_max(PARAM *param, SEL_TREE *tree)
7834
THD *thd= param->thd;
7835
JOIN *join= thd->lex->current_select->join;
7836
TABLE *table= param->table;
7854
Session *session= param->session;
7855
JOIN *join= session->lex->current_select->join;
7856
Table *table= param->table;
7837
7857
bool have_min= false; /* true if there is a MIN function. */
7838
7858
bool have_max= false; /* true if there is a MAX function. */
7839
7859
Item_field *min_max_arg_item= NULL; // The argument of all MIN/MAX functions
7840
7860
KEY_PART_INFO *min_max_arg_part= NULL; /* The corresponding keypart. */
7841
uint group_prefix_len= 0; /* Length (in bytes) of the key prefix. */
7861
uint32_t group_prefix_len= 0; /* Length (in bytes) of the key prefix. */
7842
7862
KEY *index_info= NULL; /* The index chosen for data access. */
7843
uint index= 0; /* The id of the chosen index. */
7844
uint group_key_parts= 0; // Number of index key parts in the group prefix.
7845
uint used_key_parts= 0; /* Number of index key parts used for access. */
7846
uchar key_infix[MAX_KEY_LENGTH]; /* Constants from equality predicates.*/
7847
uint key_infix_len= 0; /* Length of key_infix. */
7863
uint32_t index= 0; /* The id of the chosen index. */
7864
uint32_t group_key_parts= 0; // Number of index key parts in the group prefix.
7865
uint32_t used_key_parts= 0; /* Number of index key parts used for access. */
7866
unsigned char key_infix[MAX_KEY_LENGTH]; /* Constants from equality predicates.*/
7867
uint32_t key_infix_len= 0; /* Length of key_infix. */
7848
7868
TRP_GROUP_MIN_MAX *read_plan= NULL; /* The eventually constructed TRP. */
7869
uint32_t key_part_nr;
7870
order_st *tmp_group;
7852
7872
Item_field *item_field;
7854
7874
/* Perform few 'cheap' tests whether this access method is applicable. */
7856
return(NULL); /* This is not a select statement. */
7876
return NULL; /* This is not a select statement. */
7857
7877
if ((join->tables != 1) || /* The query must reference one table. */
7858
7878
((!join->group_list) && /* Neither GROUP BY nor a DISTINCT query. */
7859
7879
(!join->select_distinct)) ||
7860
7880
(join->select_lex->olap == ROLLUP_TYPE)) /* Check (B3) for ROLLUP */
7862
7882
if (table->s->keys == 0) /* There are no indexes to use. */
7865
7885
/* Analyze the query in more detail. */
7866
7886
List_iterator<Item> select_items_it(join->fields_list);
7868
7888
/* Check (SA1,SA4) and store the only MIN/MAX argument - the C attribute.*/
7869
7889
if (join->make_sum_func_list(join->all_fields, join->fields_list, 1))
7871
7891
if (join->sum_funcs[0])
7873
7893
Item_sum *min_max_item;
7879
7899
else if (min_max_item->sum_func() == Item_sum::MAX_FUNC)
7880
7900
have_max= true;
7884
7904
/* The argument of MIN/MAX. */
7885
Item *expr= min_max_item->args[0]->real_item();
7905
Item *expr= min_max_item->args[0]->real_item();
7886
7906
if (expr->type() == Item::FIELD_ITEM) /* Is it an attribute? */
7888
7908
if (! min_max_arg_item)
7889
7909
min_max_arg_item= (Item_field*) expr;
7890
7910
else if (! min_max_arg_item->eq(expr, 1))
7925
7945
KEY_PART_INFO *last_part= NULL;
7926
7946
KEY_PART_INFO *first_non_group_part= NULL;
7927
7947
KEY_PART_INFO *first_non_infix_part= NULL;
7928
uint key_infix_parts= 0;
7929
uint cur_group_key_parts= 0;
7930
uint cur_group_prefix_len= 0;
7948
uint32_t key_infix_parts= 0;
7949
uint32_t cur_group_key_parts= 0;
7950
uint32_t cur_group_prefix_len= 0;
7931
7951
/* Cost-related variables for the best index so far. */
7932
7952
double best_read_cost= DBL_MAX;
7933
7953
ha_rows best_records= 0;
7934
7954
SEL_ARG *best_index_tree= NULL;
7935
7955
ha_rows best_quick_prefix_records= 0;
7936
uint best_param_idx= 0;
7956
uint32_t best_param_idx= 0;
7937
7957
double cur_read_cost= DBL_MAX;
7938
7958
ha_rows cur_records;
7939
7959
SEL_ARG *cur_index_tree= NULL;
7940
7960
ha_rows cur_quick_prefix_records= 0;
7941
uint cur_param_idx=MAX_KEY;
7961
uint32_t cur_param_idx=MAX_KEY;
7942
7962
key_map cur_used_key_parts;
7943
uint pk= param->table->s->primary_key;
7963
uint32_t pk= param->table->s->primary_key;
7945
for (uint cur_index= 0 ; cur_index_info != cur_index_info_end ;
7965
for (uint32_t cur_index= 0 ; cur_index_info != cur_index_info_end ;
7946
7966
cur_index_info++, cur_index++)
7948
7968
/* Check (B1) - if current index is covering. */
       Check (GA2) if this is a DISTINCT query.
-      If GA2, then Store a new ORDER object in group_fields_array at the
-      position of the key part of item_field->field. Thus we get the ORDER
+      If GA2, then Store a new order_st object in group_fields_array at the
+      position of the key part of item_field->field. Thus we get the order_st
       objects for each field ordered as the corresponding key parts.
-      Later group_fields_array of ORDER objects is used to convert the query
+      Later group_fields_array of order_st objects is used to convert the query
       to a GROUP query.
     else if (join->select_distinct)
       select_items_it.rewind();
       cur_used_key_parts.clear_all();
-      uint max_key_part= 0;
+      uint32_t max_key_part= 0;
       while ((item= select_items_it++))
         item_field= (Item_field*) item; /* (SA5) already checked above. */
     SEL_ARG *index_range_tree= get_index_range_tree(cur_index, tree, param,
       if (!get_constant_key_infix(cur_index_info, index_range_tree,
                                   first_non_group_part, min_max_arg_part,
-                                  last_part, thd, key_infix, &key_infix_len,
+                                  last_part, session, key_infix, &key_infix_len,
                                   &first_non_infix_part))
         goto next_index;
8159
8179
&cur_param_idx);
8160
8180
/* Check if this range tree can be used for prefix retrieval. */
8161
8181
COST_VECT dummy_cost;
8162
uint mrr_flags= HA_MRR_USE_DEFAULT_IMPL;
8164
cur_quick_prefix_records= check_quick_select(param, cur_param_idx,
8165
false /*don't care*/,
8182
uint32_t mrr_flags= HA_MRR_USE_DEFAULT_IMPL;
8183
uint32_t mrr_bufsize=0;
8184
cur_quick_prefix_records= check_quick_select(param, cur_param_idx,
8185
false /*don't care*/,
8166
8186
cur_index_tree, true,
8167
8187
&mrr_flags, &mrr_bufsize,
cur_group_prefix_len= 0;
if (!index_info) /* No usable index found. */
/* Check (SA3) for the WHERE clause. */
if (join->conds && min_max_arg_item &&
    !check_group_min_max_predicates(join->conds, min_max_arg_item, Field::itRAW))
/* The query passes all tests, so construct a new TRP object. */
read_plan= new (param->mem_root)
Item_func *pred= (Item_func*) cond;
Item **arguments= pred->arguments();
for (uint32_t arg_idx= 0; arg_idx < pred->argument_count(); arg_idx++)
  cur_arg= arguments[arg_idx]->real_item();
  if (cur_arg->type() == Item::FIELD_ITEM)
    if (min_max_arg_item->eq(cur_arg, 1))
      If pred references the MIN/MAX argument, check whether pred is a range
          (args[1]->result_type() != STRING_RESULT &&
           min_max_arg_item->field->cmp_type() != args[1]->result_type())))
  else if (cur_arg->type() == Item::FUNC_ITEM)
    if (!check_group_min_max_predicates(cur_arg, min_max_arg_item,
  else if (cur_arg->const_item())
first_non_group_part  [in]  First index part after group attribute parts
min_max_arg_part      [in]  The keypart of the MIN/MAX argument if any
last_part             [in]  Last keypart of the index
session               [in]  Current thread
key_infix             [out] Infix of constants to be used for index lookup
key_infix_len         [out] Length of the infix
first_non_infix_part  [out] The first keypart after the infix (if any)

Test conditions (NGA1, NGA2) from get_best_group_min_max(). Namely,
for each keypart field NGF_i not in GROUP-BY, check that there is a
get_constant_key_infix(KEY *, SEL_ARG *index_range_tree,
                       KEY_PART_INFO *first_non_group_part,
                       KEY_PART_INFO *min_max_arg_part,
                       KEY_PART_INFO *last_part,
                       Session *, unsigned char *key_infix, uint32_t *key_infix_len,
                       KEY_PART_INFO **first_non_infix_part)
SEL_ARG *cur_range;
8435
8453
(cur_range->min_flag & NEAR_MIN) || (cur_range->max_flag & NEAR_MAX))
8438
uint field_length= cur_part->store_length;
8456
uint32_t field_length= cur_part->store_length;
8439
8457
if ((cur_range->maybe_null &&
8440
8458
cur_range->min_value[0] && cur_range->max_value[0]) ||
8441
8459
!memcmp(cur_range->min_value, cur_range->max_value, field_length))
Pointer to the SEL_ARG subtree that corresponds to index.
SEL_ARG *get_index_range_tree(uint32_t index, SEL_TREE *range_tree, PARAM *param,
                              uint32_t *param_idx)
uint32_t idx= 0; /* Index nr in param->key_parts */
while (idx < param->keys)
  if (index == param->real_keynr[idx])
void cost_group_min_max(Table* table, KEY *index_info, uint32_t used_key_parts,
                        uint32_t group_key_parts, SEL_TREE *range_tree,
                        SEL_ARG *, ha_rows quick_prefix_records,
                        bool have_min, bool have_max,
                        double *read_cost, ha_rows *records)
ha_rows table_records;
uint32_t num_groups;
uint32_t num_blocks;
uint32_t keys_per_block;
uint32_t keys_per_group;
uint32_t keys_per_subgroup; /* Average number of keys in sub-groups */
                            /* formed by a key infix. */
double p_overlap; /* Probability that a sub-group overlaps two blocks. */
double quick_prefix_selectivity;
double blocks_per_group= (double) num_blocks / (double) num_groups;
p_overlap= (blocks_per_group * (keys_per_subgroup - 1)) / keys_per_group;
p_overlap= cmin(p_overlap, 1.0);
io_cost= (double) cmin(num_groups * (1 + p_overlap), (double) num_blocks);
io_cost= (keys_per_group > keys_per_block) ?
QUICK_SELECT_I *
TRP_GROUP_MIN_MAX::make_quick(PARAM *param, bool, MEM_ROOT *parent_alloc)
QUICK_GROUP_MIN_MAX_SELECT *quick;
quick= new QUICK_GROUP_MIN_MAX_SELECT(param->table,
                                      param->session->lex->current_select->join,
                                      have_min, have_max, min_max_arg_part,
                                      group_prefix_len, group_key_parts,
                                      used_key_parts, index_info, index,
                                      read_cost, records, key_infix_len,
                                      key_infix, parent_alloc);
if (quick->init())
if (range_tree)
QUICK_GROUP_MIN_MAX_SELECT::
QUICK_GROUP_MIN_MAX_SELECT(Table *table, JOIN *join_arg, bool have_min_arg,
                           bool have_max_arg,
                           KEY_PART_INFO *min_max_arg_part_arg,
                           uint32_t group_prefix_len_arg, uint32_t group_key_parts_arg,
                           uint32_t used_key_parts_arg, KEY *index_info_arg,
                           uint32_t use_index, double read_cost_arg,
                           ha_rows records_arg, uint32_t key_infix_len_arg,
                           unsigned char *key_infix_arg, MEM_ROOT *parent_alloc)
  :join(join_arg), index_info(index_info_arg),
   group_prefix_len(group_prefix_len_arg),
   group_key_parts(group_key_parts_arg), have_min(have_min_arg),
assert(!parent_alloc);
if (!parent_alloc)
  init_sql_alloc(&alloc, join->session->variables.range_alloc_block_size, 0);
  join->session->mem_root= &alloc;
  memset(&alloc, 0, sizeof(MEM_ROOT)); // ensure that it's not used
if (group_prefix) /* Already initialized. */
if (!(last_prefix= (unsigned char*) alloc_root(&alloc, group_prefix_len)))
We may use group_prefix to store keys with all select fields, so allocate
enough space for it.
if (!(group_prefix= (unsigned char*) alloc_root(&alloc,
                                                real_prefix_len + min_max_arg_len)))
The memory location pointed to by key_infix will be deleted soon, so
allocate a new buffer and copy the key_infix into it.
unsigned char *tmp_key_infix= (unsigned char*) alloc_root(&alloc, key_infix_len);
if (!tmp_key_infix)
memcpy(tmp_key_infix, this->key_infix, key_infix_len);
bool QUICK_GROUP_MIN_MAX_SELECT::add_range(SEL_ARG *sel_range)
QUICK_RANGE *range;
uint32_t range_flag= sel_range->min_flag | sel_range->max_flag;
/* Skip (-inf,+inf) ranges, e.g. (x < 5 or x > 4). */
if ((range_flag & NO_MIN_RANGE) && (range_flag & NO_MAX_RANGE))
quick_prefix_select is made over the conditions on the whole key.
It defines a number of ranges of length x.
However, when jumping through the prefixes we use only the first
few most significant keyparts in the range key. However, if there
are more keyparts to follow the ones we are using, we must make the
condition on the key inclusive (because x < "ab" means
x[0] < 'a' OR (x[0] == 'a' AND x[1] < 'b')).
To achieve the above we must turn off the NEAR_MIN/NEAR_MAX
    group_prefix_len < quick_prefix_select->max_used_key_length)
DYNAMIC_ARRAY *arr;
for (inx= 0, arr= &quick_prefix_select->ranges; inx < arr->elements; inx++)
  QUICK_RANGE *range;
  get_dynamic(arr, (unsigned char*) &range, inx);
  range->flag &= ~(NEAR_MIN | NEAR_MAX);
QUICK_RANGE *cur_range;
{ /* Check if the right-most range has a lower boundary. */
  get_dynamic(&min_max_ranges, (unsigned char*) &cur_range,
              min_max_ranges.elements - 1);
  if (!(cur_range->flag & NO_MIN_RANGE))
{ /* Check if the left-most range has an upper boundary. */
  get_dynamic(&min_max_ranges, (unsigned char*) &cur_range, 0);
  if (!(cur_range->flag & NO_MAX_RANGE))
    max_used_key_length+= min_max_arg_len;
file->extra(HA_EXTRA_KEYREAD); /* We need only the key attributes */
if ((result= file->ha_index_init(index, 1)))
if (quick_prefix_select && quick_prefix_select->reset())
result= file->index_last(record);
if (result == HA_ERR_END_OF_FILE)
/* Save the prefix of the last group. */
key_copy(last_prefix, record, index_info, group_prefix_len);
Get the next key containing the MIN and/or MAX key for the next group.
if (max_res == 0)
  update_max_result();
/* If a MIN was found, a MAX must have been found as well. */
assert(((have_max && !have_min) ||
        (have_max && have_min && (max_res == 0))));
If this is just a GROUP BY or DISTINCT without MIN or MAX and there
QUICK_RANGE *cur_range;
bool found_null= false;
int result= HA_ERR_KEY_NOT_FOUND;
basic_string<unsigned char> max_key;
max_key.reserve(real_prefix_len + min_max_arg_len);
assert(min_max_ranges.elements > 0);
for (uint32_t range_idx= 0; range_idx < min_max_ranges.elements; range_idx++)
{ /* Search from the left-most range to the right. */
  get_dynamic(&min_max_ranges, (unsigned char*) &cur_range, range_idx);
  If the current value for the min/max argument is bigger than the right
  boundary of cur_range, there is no need to check this range.
  if (range_idx != 0 && !(cur_range->flag & NO_MAX_RANGE) &&
      (key_cmp(min_max_arg_part, (const unsigned char*) cur_range->max_key,
               min_max_arg_len) == 1))
if (!(cur_range->flag & NO_MAX_RANGE))
  /* Compose the MAX key for the range. */
  max_key.append(group_prefix, real_prefix_len);
  max_key.append(cur_range->max_key, cur_range->max_length);
  /* Compare the found key with max_key. */
  int cmp_res= key_cmp(index_info->key_part,
                       real_prefix_len + min_max_arg_len);
  if ((!((cur_range->flag & NEAR_MAX) && (cmp_res == -1)) || (cmp_res <= 0)))
key_part_map keypart_map;
QUICK_RANGE *cur_range;
basic_string<unsigned char> min_key;
min_key.reserve(real_prefix_len + min_max_arg_len);
assert(min_max_ranges.elements > 0);
for (uint32_t range_idx= min_max_ranges.elements; range_idx > 0; range_idx--)
{ /* Search from the right-most range to the left. */
  get_dynamic(&min_max_ranges, (unsigned char*) &cur_range, range_idx - 1);
If the current value for the min/max argument is smaller than the left
if (!(cur_range->flag & NO_MIN_RANGE))
  /* Compose the MIN key for the range. */
  min_key.append(group_prefix, real_prefix_len);
  min_key.append(cur_range->min_key, cur_range->min_length);
  /* Compare the found key with min_key. */
  int cmp_res= key_cmp(index_info->key_part,
                       real_prefix_len + min_max_arg_len);
  if ((!((cur_range->flag & NEAR_MIN) && (cmp_res == 1)) ||
       (cmp_res >= 0)))
                                              String *used_lengths)
key_names->append(index_info->name);
length= int64_t2str(max_used_key_length, buf, 10) - buf;
used_lengths->append(buf, length);
static void print_sel_tree(PARAM *param, SEL_TREE *tree, key_map *tree_map, const char *)
SEL_ARG **key, **end;
if (!tmp.length())
  tmp.append(STRING_WITH_LEN("(empty)"));
static void print_ror_scans_arr(Table *table,
                                const char *, struct st_ror_scan_info **start,
                                struct st_ror_scan_info **end)
char buff[1024];