152
131
return static_cast<ha_rows>(x);
134
extern "C" int refpos_order_cmp(void* arg, const void *a,const void *b)
136
handler *file= (handler*)arg;
137
return file->cmp_ref((const unsigned char*)a, (const unsigned char*)b);
140
static int sel_cmp(Field *f,unsigned char *a,unsigned char *b,uint8_t a_flag,uint8_t b_flag);
155
142
static unsigned char is_null_string[2]= {1,0};
  Get cost of reading nrows table records in a "disk sweep"

  A disk sweep read is a sequence of Cursor->rnd_pos(rowid) calls that are made
  for an ordered sequence of rowids.

  We assume hard disk IO. The read is performed as follows:

   1. The disk head is moved to the needed cylinder
   2. The controller waits for the plate to rotate
   3. The data is transferred

  Time to do #3 is insignificant compared to #2+#1.

  Time to move the disk head is proportional to head travel distance.

  Time to wait for the plate to rotate depends on whether the disk head
  was moved or not.

  If disk head wasn't moved, the wait time is proportional to distance
  between the previous block and the block we're reading.

  If the head was moved, we don't know how much we'll need to wait for the
  plate to rotate. We assume the wait time to be a variate with a mean of
  0.5 of full rotation time.

  Our cost units are "random disk seeks". The cost of a random disk seek is
  actually not a constant, it depends on the range of cylinders we're going
  to access. We make it constant by introducing a fuzzy concept of "typical
  datafile length" (it's fuzzy as it's hard to tell whether it should
  include index cursors, temp. tables etc). Then random seek cost is:

    1 = half_rotation_cost + move_cost * 1/3 * typical_data_file_length

  We define half_rotation_cost as DISK_SEEK_BASE_COST=0.9.

  @param table             Table to be accessed
  @param nrows             Number of rows to retrieve
  @param interrupted       true <=> Assume that the disk sweep will be
                           interrupted by other disk IO. false - otherwise.
  @param cost         OUT  The cost.
class RANGE_OPT_PARAM;
  A construction block of the SEL_ARG-graph.

  The following description only covers graphs of SEL_ARG objects with
  sel_arg->type==KEY_RANGE:

  One SEL_ARG object represents an "elementary interval" in form

      min_value <=? table.keypartX <=? max_value

  The interval is a non-empty interval of any kind: with[out] minimum/maximum
  bound, [half]open/closed, single-point interval, etc.

  1. SEL_ARG GRAPH STRUCTURE

  SEL_ARG objects are linked together in a graph. The meaning of the graph
  is better demonstrated by an example:

     |    part=1   $     part=2   $    part=3
     |             $              $
     |  +-------+  $  +-------+   $  +--------+
     |  | kp1<1 |--$-->| kp2=5 |--$-->| kp3=10 |
     |  +-------+  $  +-------+   $  +--------+
     |      |      $              $      |
     |      |      $              $  +--------+
     |      |      $              $  | kp3=12 |
     |      |      $              $  +--------+
     |  +-------+  $              $
     \->| kp1=2 |--$--------------$-+
        +-------+  $              $ |  +--------+
            |      $              $ +->| kp3=11 |
        +-------+  $              $ |  +--------+
        | kp1=3 |--$--------------$-+      |
        +-------+  $              $    +--------+
            |      $              $    | kp3=14 |
           ...     $              $    +--------+

  The entire graph is partitioned into "interval lists".

  An interval list is a sequence of ordered disjoint intervals over the same
  key part. SEL_ARGs in an interval list are linked via "next" and "prev"
  pointers. Additionally, all intervals in the list form an RB-tree, linked
  via left/right/parent pointers. The RB-tree root SEL_ARG object will be
  further called "root of the interval list".

  In the example pic, there are 4 interval lists:
  "kp1<1 OR kp1=2 OR kp1=3", "kp2=5", "kp3=10 OR kp3=12", "kp3=11 OR kp3=14".
  The vertical lines represent SEL_ARG::next/prev pointers.

  In an interval list, each member X may have a SEL_ARG::next_key_part pointer
  pointing to the root of another interval list Y. The pointed interval list
  must cover a key part with a greater number (i.e. Y->part > X->part).

  In the example pic, the next_key_part pointers are represented by
  horizontal lines.

  2. SEL_ARG GRAPH SEMANTICS

  It represents a condition in a special form (we don't have a name for it ATM):
  the SEL_ARG::next/prev is "OR", and next_key_part is "AND".

  For example, the picture represents the condition in form:
   (kp1 < 1 AND kp2=5 AND (kp3=10 OR kp3=12)) OR
   (kp1=2 AND (kp3=11 OR kp3=14)) OR
   (kp1=3 AND (kp3=11 OR kp3=14))

  3. SEL_ARG GRAPH USE

  Use get_mm_tree() to construct a SEL_ARG graph from a WHERE condition.
  Then walk the SEL_ARG graph and get a list of disjoint ordered key
  intervals (i.e. intervals in form

   (constA1, .., constA_K) < (keypart1, .., keypartK) < (constB1, .., constB_K)

  Those intervals can be used to access the index. The uses are in:
   - check_quick_select() - Walk the SEL_ARG graph and find an estimate of
                            how many table records are contained within all
                            intervals.
   - get_quick_select()   - Walk the SEL_ARG, materialize the key intervals,
                            and create QUICK_RANGE_SELECT object that will
                            read records within these intervals.
  4. SPACE COMPLEXITY NOTES

  SEL_ARG graph is a representation of an ordered disjoint sequence of
  intervals over the ordered set of index tuple values.

  For multi-part keys, one can construct a WHERE expression such that its
  list of intervals will be of combinatorial size. Here is an example:

    (keypart1 IN (1,2, ..., n1)) AND
    (keypart2 IN (1,2, ..., n2)) AND
    (keypart3 IN (1,2, ..., n3))

  For this WHERE clause the list of intervals will have n1*n2*n3 intervals
  of form

    (keypart1, keypart2, keypart3) = (k1, k2, k3), where 1 <= k{i} <= n{i}

  SEL_ARG graph structure aims to reduce the amount of required space by
  "sharing" the elementary intervals when possible (the pic at the
  beginning of this comment has examples of such sharing). The sharing may
  prevent combinatorial blowup:

    There are WHERE clauses that have combinatorial-size interval lists but
    will be represented by a compact SEL_ARG graph.
    Example:
      (keypartN IN (1,2, ..., n1)) AND
      ...
      (keypart2 IN (1,2, ..., n2)) AND
      (keypart1 IN (1,2, ..., n3))

  but not in all cases:

  - There are WHERE clauses that do have a compact SEL_ARG-graph
    representation but get_mm_tree() and its callees will construct a
    graph of combinatorial size.
    Example:
      (keypart1 IN (1,2, ..., n1)) AND
      (keypart2 IN (1,2, ..., n2)) AND
      ...
      (keypartN IN (1,2, ..., n3))

  - There are WHERE clauses for which the minimal possible SEL_ARG graph
    representation will have combinatorial size.
    Example:
      By induction: Let's take any interval on some keypart in the middle:

        kp15=c0

      Then let's AND it with this interval 'structure' from preceding and
      following keyparts:

        ((kp14=c1 AND kp16=c3) OR kp14=c2)                          (*)

      We will obtain this SEL_ARG graph:

        +---------+  $   +---------+  $   +---------+
        | kp14=c1 |--$-->| kp15=c0 |--$-->| kp16=c3 |
        +---------+  $   +---------+  $   +---------+
             |       $                $
        +---------+  $   +---------+  $
        | kp14=c2 |--$-->| kp15=c0 |  $
        +---------+  $   +---------+  $

      Note that we had to duplicate "kp15=c0" and there was no way to avoid
      that.
      The induction step: AND the obtained expression with another "wrapping"
      expression like (*).
      When the process ends because of the limit on the max. number of keyparts,
      we'll have:

        WHERE clause length  is O(3*#max_keyparts)
        SEL_ARG graph size   is O(2^(#max_keyparts/2))

      (it is also possible to construct a case where instead of 2 in 2^n we
       have a bigger constant, e.g. 4, and get a graph with 4^(31/2)= 2^31
       nodes)

  We avoid consuming too much memory by setting a limit on the number of
  SEL_ARG objects we can construct during one range analysis invocation.
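
  For illustration only (numbers chosen arbitrarily): with #max_keyparts = 32
  the "wrapping" construction above needs a WHERE clause of only about
  3*32 = 96 predicates, yet even the minimal SEL_ARG graph for it has on the
  order of 2^(32/2) = 65536 nodes. This is the kind of blowup the
  MAX_SEL_ARGS limit below guards against.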
static void get_sweep_read_cost(Table *table,
204
optimizer::CostVector *cost)
207
if (table->cursor->primary_key_is_clustered())
209
cost->setIOCount(table->cursor->read_time(table->s->primary_key,
210
static_cast<uint32_t>(nrows),
216
ceil(uint64_t2double(table->cursor->stats.data_file_length) / IO_SIZE);
218
n_blocks * (1.0 - pow(1.0 - 1.0/n_blocks, rows2double(nrows)));
219
if (busy_blocks < 1.0)
222
cost->setIOCount(busy_blocks);
226
/* Assume reading is done in one 'sweep' */
227
cost->setAvgIOCost((DISK_SEEK_BASE_COST +
228
DISK_SEEK_PROP_COST*n_blocks/busy_blocks));
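
/*
  Illustrative example only (numbers are made up, not from any benchmark):
  for a data file of n_blocks = 100000 IO_SIZE blocks and nrows = 1000,

    busy_blocks = n_blocks * (1 - (1 - 1/n_blocks)^nrows)
                ~= 100000 * (1 - e^(-1000/100000)) ~= 995

  i.e. nearly every fetched row lands in its own block, and in the
  uninterrupted "sweep" case the average cost per IO becomes
  DISK_SEEK_BASE_COST + DISK_SEEK_PROP_COST * n_blocks / busy_blocks,
  a short seek proportional to the average gap between busy blocks.
*/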
class SEL_ARG :public Sql_alloc
318
uint8_t min_flag,max_flag,maybe_flag;
319
uint8_t part; // Which key part
322
Number of children of this element in the RB-tree, plus 1 for this element itself.
327
Valid only for elements which are RB-tree roots: Number of times this
328
RB-tree is referred to (it is referred by SEL_ARG::next_key_part or by
329
SEL_TREE::keys[i] or by a temporary SEL_ARG* variable)
334
unsigned char *min_value,*max_value; // Pointer to range
337
eq_tree() requires that left == right == 0 if the type is MAYBE_KEY.
339
SEL_ARG *left,*right; /* R-B tree children */
340
SEL_ARG *next,*prev; /* Links for bi-directional interval list */
341
SEL_ARG *parent; /* R-B tree parent */
342
SEL_ARG *next_key_part;
343
enum leaf_color { BLACK,RED } color;
344
enum Type { IMPOSSIBLE, MAYBE, MAYBE_KEY, KEY_RANGE } type;
346
enum { MAX_SEL_ARGS = 16000 };
350
SEL_ARG(Field *,const unsigned char *, const unsigned char *);
351
SEL_ARG(Field *field, uint8_t part, unsigned char *min_value, unsigned char *max_value,
352
uint8_t min_flag, uint8_t max_flag, uint8_t maybe_flag);
353
SEL_ARG(enum Type type_arg)
354
:min_flag(0),elements(1),use_count(1),left(0),right(0),next_key_part(0),
355
color(BLACK), type(type_arg)
357
inline bool is_same(SEL_ARG *arg)
359
if (type != arg->type || part != arg->part)
361
if (type != KEY_RANGE)
363
return cmp_min_to_min(arg) == 0 && cmp_max_to_max(arg) == 0;
365
inline void merge_flags(SEL_ARG *arg) { maybe_flag|=arg->maybe_flag; }
366
inline void maybe_smaller() { maybe_flag=1; }
367
/* Return true iff it's a single-point null interval */
368
inline bool is_null_interval() { return maybe_null && max_value[0] == 1; }
369
inline int cmp_min_to_min(SEL_ARG* arg)
371
return sel_cmp(field,min_value, arg->min_value, min_flag, arg->min_flag);
373
inline int cmp_min_to_max(SEL_ARG* arg)
375
return sel_cmp(field,min_value, arg->max_value, min_flag, arg->max_flag);
377
inline int cmp_max_to_max(SEL_ARG* arg)
379
return sel_cmp(field,max_value, arg->max_value, max_flag, arg->max_flag);
381
inline int cmp_max_to_min(SEL_ARG* arg)
383
return sel_cmp(field,max_value, arg->min_value, max_flag, arg->min_flag);
385
SEL_ARG *clone_and(SEL_ARG* arg)
386
{ // Get overlapping range
387
unsigned char *new_min,*new_max;
388
uint8_t flag_min,flag_max;
389
if (cmp_min_to_min(arg) >= 0)
391
new_min=min_value; flag_min=min_flag;
395
new_min=arg->min_value; flag_min=arg->min_flag; /* purecov: deadcode */
397
if (cmp_max_to_max(arg) <= 0)
399
new_max=max_value; flag_max=max_flag;
403
new_max=arg->max_value; flag_max=arg->max_flag;
405
return new SEL_ARG(field, part, new_min, new_max, flag_min, flag_max,
406
test(maybe_flag && arg->maybe_flag));
408
SEL_ARG *clone_first(SEL_ARG *arg)
409
{ // min <= X < arg->min
410
return new SEL_ARG(field,part, min_value, arg->min_value,
411
min_flag, arg->min_flag & NEAR_MIN ? 0 : NEAR_MAX,
412
maybe_flag | arg->maybe_flag);
414
SEL_ARG *clone_last(SEL_ARG *arg)
415
{ // min <= X <= key_max
416
return new SEL_ARG(field, part, min_value, arg->max_value,
417
min_flag, arg->max_flag, maybe_flag | arg->maybe_flag);
419
SEL_ARG *clone(RANGE_OPT_PARAM *param, SEL_ARG *new_parent, SEL_ARG **next);
421
bool copy_min(SEL_ARG* arg)
422
{ // Get overlapping range
423
if (cmp_min_to_min(arg) > 0)
425
min_value=arg->min_value; min_flag=arg->min_flag;
426
if ((max_flag & (NO_MAX_RANGE | NO_MIN_RANGE)) ==
427
(NO_MAX_RANGE | NO_MIN_RANGE))
428
return 1; // Full range
430
maybe_flag|=arg->maybe_flag;
433
bool copy_max(SEL_ARG* arg)
434
{ // Get overlapping range
435
if (cmp_max_to_max(arg) <= 0)
437
max_value=arg->max_value; max_flag=arg->max_flag;
438
if ((max_flag & (NO_MAX_RANGE | NO_MIN_RANGE)) ==
439
(NO_MAX_RANGE | NO_MIN_RANGE))
440
return 1; // Full range
442
maybe_flag|=arg->maybe_flag;
446
void copy_min_to_min(SEL_ARG *arg)
448
min_value=arg->min_value; min_flag=arg->min_flag;
450
void copy_min_to_max(SEL_ARG *arg)
452
max_value=arg->min_value;
453
max_flag=arg->min_flag & NEAR_MIN ? 0 : NEAR_MAX;
455
void copy_max_to_min(SEL_ARG *arg)
457
min_value=arg->max_value;
458
min_flag=arg->max_flag & NEAR_MAX ? 0 : NEAR_MIN;
460
/* returns a number of keypart values (0 or 1) appended to the key buffer */
461
int store_min(uint32_t length, unsigned char **min_key,uint32_t min_key_flag)
463
/* "(kp1 > c1) AND (kp2 OP c2) AND ..." -> (kp1 > c1) */
464
if ((!(min_flag & NO_MIN_RANGE) &&
465
!(min_key_flag & (NO_MIN_RANGE | NEAR_MIN))))
467
if (maybe_null && *min_value)
470
memset(*min_key+1, 0, length-1);
473
memcpy(*min_key,min_value,length);
479
/* returns a number of keypart values (0 or 1) appended to the key buffer */
480
int store_max(uint32_t length, unsigned char **max_key, uint32_t max_key_flag)
482
if (!(max_flag & NO_MAX_RANGE) &&
483
!(max_key_flag & (NO_MAX_RANGE | NEAR_MAX)))
485
if (maybe_null && *max_value)
488
memset(*max_key+1, 0, length-1);
491
memcpy(*max_key,max_value,length);
498
/* returns a number of keypart values appended to the key buffer */
499
int store_min_key(KEY_PART *key, unsigned char **range_key, uint32_t *range_key_flag)
501
SEL_ARG *key_tree= first();
502
uint32_t res= key_tree->store_min(key[key_tree->part].store_length,
503
range_key, *range_key_flag);
504
*range_key_flag|= key_tree->min_flag;
506
if (key_tree->next_key_part &&
507
key_tree->next_key_part->part == key_tree->part+1 &&
508
!(*range_key_flag & (NO_MIN_RANGE | NEAR_MIN)) &&
509
key_tree->next_key_part->type == SEL_ARG::KEY_RANGE)
510
res+= key_tree->next_key_part->store_min_key(key, range_key,
515
/* returns a number of keypart values appended to the key buffer */
516
int store_max_key(KEY_PART *key, unsigned char **range_key, uint32_t *range_key_flag)
518
SEL_ARG *key_tree= last();
519
uint32_t res=key_tree->store_max(key[key_tree->part].store_length,
520
range_key, *range_key_flag);
521
(*range_key_flag)|= key_tree->max_flag;
522
if (key_tree->next_key_part &&
523
key_tree->next_key_part->part == key_tree->part+1 &&
524
!(*range_key_flag & (NO_MAX_RANGE | NEAR_MAX)) &&
525
key_tree->next_key_part->type == SEL_ARG::KEY_RANGE)
526
res+= key_tree->next_key_part->store_max_key(key, range_key,
531
SEL_ARG *insert(SEL_ARG *key);
532
SEL_ARG *tree_delete(SEL_ARG *key);
533
SEL_ARG *find_range(SEL_ARG *key);
534
SEL_ARG *rb_insert(SEL_ARG *leaf);
535
friend SEL_ARG *rb_delete_fixup(SEL_ARG *root,SEL_ARG *key, SEL_ARG *par);
537
friend int test_rb_tree(SEL_ARG *element,SEL_ARG *parent);
538
void test_use_count(SEL_ARG *root);
543
inline bool simple_key()
545
return !next_key_part && elements == 1;
547
void increment_use_count(long count)
551
next_key_part->use_count+=count;
552
count*= (next_key_part->use_count-count);
553
for (SEL_ARG *pos=next_key_part->first(); pos ; pos=pos->next)
554
if (pos->next_key_part)
555
pos->increment_use_count(count);
560
for (SEL_ARG *pos=first(); pos ; pos=pos->next)
561
if (pos->next_key_part)
563
pos->next_key_part->use_count--;
564
pos->next_key_part->free_tree();
568
inline SEL_ARG **parent_ptr()
570
return parent->left == this ? &parent->left : &parent->right;
575
Check if this SEL_ARG object represents a single-point interval
  Check if this SEL_ARG object (not tree) represents a single-point
  interval, i.e. if it represents a "keypart = const" or
  "keypart IS NULL" condition.

    true   This SEL_ARG object represents a single-point interval
    false  Otherwise
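
    For example (illustrative, not exhaustive): "kp1 = 7" and "kp1 IS NULL"
    are single-point intervals, while "kp1 > 7" is not (min_flag has
    NEAR_MIN set) and neither is "kp1 BETWEEN 5 AND 7" (min_value and
    max_value differ).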
bool is_singlepoint()
593
Check for NEAR_MIN ("strictly less") and NO_MIN_RANGE (-inf < field)
594
flags, and the same for right edge.
596
if (min_flag || max_flag)
598
unsigned char *min_val= min_value;
599
unsigned char *max_val= max_value;
603
/* First byte is a NULL value indicator */
604
if (*min_val != *max_val)
608
return true; /* This is "x IS NULL" */
612
return !field->key_cmp(min_val, max_val);
614
SEL_ARG *clone_tree(RANGE_OPT_PARAM *param);
620
class SEL_TREE :public Sql_alloc
624
Starting an effort to document this field:
625
(for some i, keys[i]->type == SEL_ARG::IMPOSSIBLE) =>
626
(type == SEL_TREE::IMPOSSIBLE)
628
enum Type { IMPOSSIBLE, ALWAYS, MAYBE, KEY, KEY_SMALLER } type;
629
SEL_TREE(enum Type type_arg) :type(type_arg) {}
630
SEL_TREE() :type(KEY)
633
memset(keys, 0, sizeof(keys));
636
Note: there may exist SEL_TREE objects with sel_tree->type=KEY and
637
keys[i]=0 for all i. (SergeyP: it is not clear whether there is any
638
merit in range analyzer functions (e.g. get_mm_parts) returning a
639
pointer to such SEL_TREE instead of NULL)
641
SEL_ARG *keys[MAX_KEY];
642
key_map keys_map; /* bitmask of non-NULL elements in keys */
645
Possible ways to read rows using index_merge. The list is non-empty only
if type==KEY. Currently it can be non-empty only if keys_map.none().
648
vector<SEL_IMERGE*> merges;
650
/* The members below are filled/used only after get_mm_tree is done */
651
key_map ror_scans_map; /* bitmask of ROR scan-able elements in keys */
652
uint32_t n_ror_scans; /* number of set bits in ror_scans_map */
654
struct st_ror_scan_info **ror_scans; /* list of ROR key scans */
655
struct st_ror_scan_info **ror_scans_end; /* last ROR scan */
656
/* Note that #records for each key scan is stored in table->quick_rows */
659
class RANGE_OPT_PARAM
662
Session *session; /* Current thread handle */
663
Table *table; /* Table being analyzed */
664
COND *cond; /* Used inside get_mm_tree(). */
665
table_map prev_tables;
666
table_map read_tables;
667
table_map current_table; /* Bit of the table being analyzed */
669
/* Array of parts of all keys for which range analysis is performed */
671
KEY_PART *key_parts_end;
672
MEM_ROOT *mem_root; /* Memory that will be freed when range analysis completes */
673
MEM_ROOT *old_root; /* Memory that will last until the query end */
675
Number of indexes used in range analysis (In SEL_TREE::keys only first
676
#keys elements are not empty)
681
If true, the index descriptions describe real indexes (and it is ok to
call field->optimize_range(real_keynr[...], ...)).
Otherwise the index descriptions describe fake indexes.
685
bool using_real_indexes;
687
bool remove_jump_scans;
690
used_key_no -> table_key_no translation table. Only makes sense if
691
using_real_indexes==true
693
uint32_t real_keynr[MAX_KEY];
694
/* Number of SEL_ARG objects allocated by SEL_ARG::clone_tree operations */
695
uint32_t alloced_sel_args;
696
bool force_default_mrr;
699
class PARAM : public RANGE_OPT_PARAM
702
KEY_PART *key[MAX_KEY]; /* First key parts of keys used in the query */
704
uint32_t max_key_part;
705
/* Number of ranges in the last checked tree->key */
706
uint32_t range_count;
707
unsigned char min_key[MAX_KEY_LENGTH+MAX_FIELD_WIDTH],
708
max_key[MAX_KEY_LENGTH+MAX_FIELD_WIDTH];
709
bool quick; // Don't calculate possible keys
711
uint32_t fields_bitmap_size;
712
MyBitmap needed_fields; /* bitmask of fields needed by the query */
713
MyBitmap tmp_covered_fields;
715
key_map *needed_reg; /* ptr to SQL_SELECT::needed_reg */
717
uint32_t *imerge_cost_buff; /* buffer for index_merge cost estimates */
718
uint32_t imerge_cost_buff_size; /* size of the buffer */
720
/* true if last checked tree->key can be used for ROR-scan */
722
/* Number of ranges in the last checked tree->key */
726
class TABLE_READ_PLAN;
728
class TRP_ROR_INTERSECT;
730
class TRP_ROR_INDEX_MERGE;
731
class TRP_GROUP_MIN_MAX;
233
733
struct st_ror_scan_info;
235
static optimizer::SEL_TREE * get_mm_parts(optimizer::RangeParameter *param,
238
Item_func::Functype type,
240
Item_result cmp_type);
242
static optimizer::SEL_ARG *get_mm_leaf(optimizer::RangeParameter *param,
246
Item_func::Functype type,
249
static optimizer::SEL_TREE *get_mm_tree(optimizer::RangeParameter *param, COND *cond);
251
static bool is_key_scan_ror(optimizer::Parameter *param, uint32_t keynr, uint8_t nparts);
253
static ha_rows check_quick_select(optimizer::Parameter *param,
256
optimizer::SEL_ARG *tree,
257
bool update_tbl_stats,
260
optimizer::CostVector *cost);
262
static optimizer::RangeReadPlan *get_key_scans_params(optimizer::Parameter *param,
263
optimizer::SEL_TREE *tree,
264
bool index_read_must_be_used,
265
bool update_tbl_stats,
269
optimizer::RorIntersectReadPlan *get_best_ror_intersect(const optimizer::Parameter *param,
270
optimizer::SEL_TREE *tree,
272
bool *are_all_covering);
275
optimizer::RorIntersectReadPlan *get_best_covering_ror_intersect(optimizer::Parameter *param,
276
optimizer::SEL_TREE *tree,
280
optimizer::TableReadPlan *get_best_disjunct_quick(optimizer::Parameter *param,
281
optimizer::SEL_IMERGE *imerge,
285
optimizer::GroupMinMaxReadPlan *get_best_group_min_max(optimizer::Parameter *param, optimizer::SEL_TREE *tree);
287
static optimizer::SEL_TREE *tree_and(optimizer::RangeParameter *param,
288
optimizer::SEL_TREE *tree1,
289
optimizer::SEL_TREE *tree2);
291
static optimizer::SEL_ARG *sel_add(optimizer::SEL_ARG *key1, optimizer::SEL_ARG *key2);
293
static optimizer::SEL_ARG *key_and(optimizer::RangeParameter *param,
294
optimizer::SEL_ARG *key1,
295
optimizer::SEL_ARG *key2,
296
uint32_t clone_flag);
298
static bool get_range(optimizer::SEL_ARG **e1, optimizer::SEL_ARG **e2, optimizer::SEL_ARG *root1);
300
optimizer::SEL_ARG optimizer::null_element(optimizer::SEL_ARG::IMPOSSIBLE);
302
static bool null_part_in_key(KEY_PART *key_part,
303
const unsigned char *key,
735
static SEL_TREE * get_mm_parts(RANGE_OPT_PARAM *param,COND *cond_func,Field *field,
736
Item_func::Functype type,Item *value,
737
Item_result cmp_type);
738
static SEL_ARG *get_mm_leaf(RANGE_OPT_PARAM *param,COND *cond_func,Field *field,
740
Item_func::Functype type,Item *value);
741
static SEL_TREE *get_mm_tree(RANGE_OPT_PARAM *param,COND *cond);
743
static bool is_key_scan_ror(PARAM *param, uint32_t keynr, uint8_t nparts);
744
static ha_rows check_quick_select(PARAM *param, uint32_t idx, bool index_only,
745
SEL_ARG *tree, bool update_tbl_stats,
746
uint32_t *mrr_flags, uint32_t *bufsize,
748
//bool update_tbl_stats);
749
/*static ha_rows check_quick_keys(PARAM *param,uint32_t index,SEL_ARG *key_tree,
750
unsigned char *min_key, uint32_t min_key_flag, int,
751
unsigned char *max_key, uint32_t max_key_flag, int);
754
QUICK_RANGE_SELECT *get_quick_select(PARAM *param,uint32_t index,
755
SEL_ARG *key_tree, uint32_t mrr_flags,
756
uint32_t mrr_buf_size, MEM_ROOT *alloc);
757
static TRP_RANGE *get_key_scans_params(PARAM *param, SEL_TREE *tree,
758
bool index_read_must_be_used,
759
bool update_tbl_stats,
762
TRP_ROR_INTERSECT *get_best_ror_intersect(const PARAM *param, SEL_TREE *tree,
764
bool *are_all_covering);
766
TRP_ROR_INTERSECT *get_best_covering_ror_intersect(PARAM *param,
770
TABLE_READ_PLAN *get_best_disjunct_quick(PARAM *param, SEL_IMERGE *imerge,
773
TRP_GROUP_MIN_MAX *get_best_group_min_max(PARAM *param, SEL_TREE *tree);
775
static void print_sel_tree(PARAM *param, SEL_TREE *tree, key_map *tree_map,
777
static void print_ror_scans_arr(Table *table, const char *msg,
778
struct st_ror_scan_info **start,
779
struct st_ror_scan_info **end);
781
static SEL_TREE *tree_and(RANGE_OPT_PARAM *param,SEL_TREE *tree1,SEL_TREE *tree2);
782
static SEL_TREE *tree_or(RANGE_OPT_PARAM *param,SEL_TREE *tree1,SEL_TREE *tree2);
783
static SEL_ARG *sel_add(SEL_ARG *key1,SEL_ARG *key2);
784
static SEL_ARG *key_or(RANGE_OPT_PARAM *param, SEL_ARG *key1, SEL_ARG *key2);
785
static SEL_ARG *key_and(RANGE_OPT_PARAM *param, SEL_ARG *key1, SEL_ARG *key2,
786
uint32_t clone_flag);
787
static bool get_range(SEL_ARG **e1,SEL_ARG **e2,SEL_ARG *root1);
788
bool get_quick_keys(PARAM *param,QUICK_RANGE_SELECT *quick,KEY_PART *key,
789
SEL_ARG *key_tree, unsigned char *min_key,uint32_t min_key_flag,
790
unsigned char *max_key,uint32_t max_key_flag);
791
static bool eq_tree(SEL_ARG* a,SEL_ARG *b);
793
static SEL_ARG null_element(SEL_ARG::IMPOSSIBLE);
794
static bool null_part_in_key(KEY_PART *key_part, const unsigned char *key,
304
795
uint32_t length);
306
bool sel_trees_can_be_ored(optimizer::SEL_TREE *tree1,
307
optimizer::SEL_TREE *tree2,
308
optimizer::RangeParameter *param);
316
Perform AND operation on two index_merge lists and store result in *im1.
319
inline void imerge_list_and_list(List<optimizer::SEL_IMERGE> *im1, List<optimizer::SEL_IMERGE> *im2)
796
bool sel_trees_can_be_ored(SEL_TREE *tree1, SEL_TREE *tree2, RANGE_OPT_PARAM* param);
  SEL_IMERGE is a list of possible ways to do index merge, i.e. it is
  a condition in the following form:
   (t_1||t_2||...||t_N) && (next)

  where all t_i are SEL_TREEs, next is another SEL_IMERGE and no pair
  (t_i,t_j) contains SEL_ARGs for the same index.

  A SEL_TREE contained in a SEL_IMERGE always has merges=NULL.

  This class relies on the memory manager to do the cleanup.
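
  For illustration (schema and key names are made up): a condition like

    (key1part1 = 10) OR (key2part1 = 20)

  over two different indexes can be represented by one SEL_IMERGE holding
  two trees, t_1 with SEL_ARGs for key1 and t_2 with SEL_ARGs for key2.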
class SEL_IMERGE : public Sql_alloc
814
enum { PREALLOCED_TREES= 10};
816
SEL_TREE *trees_prealloced[PREALLOCED_TREES];
817
SEL_TREE **trees; /* trees used to do index_merge */
818
SEL_TREE **trees_next; /* last of these trees */
819
SEL_TREE **trees_end; /* end of allocated space */
821
SEL_ARG ***best_keys; /* best keys to read in SEL_TREEs */
824
trees(&trees_prealloced[0]),
826
trees_end(trees + PREALLOCED_TREES)
828
int or_sel_tree(RANGE_OPT_PARAM *param, SEL_TREE *tree);
829
int or_sel_tree_with_checks(RANGE_OPT_PARAM *param, SEL_TREE *new_tree);
830
int or_sel_imerge_with_checks(RANGE_OPT_PARAM *param, SEL_IMERGE* imerge);
835
Add SEL_TREE to this index_merge without any checks,
838
This function implements the following:
839
(x_1||...||x_N) || t = (x_1||...||x_N||t), where x_i, t are SEL_TREEs
846
int SEL_IMERGE::or_sel_tree(RANGE_OPT_PARAM *param, SEL_TREE *tree)
848
if (trees_next == trees_end)
850
const int realloc_ratio= 2; /* Double size for next round */
851
uint32_t old_elements= (trees_end - trees);
852
uint32_t old_size= sizeof(SEL_TREE**) * old_elements;
853
uint32_t new_size= old_size * realloc_ratio;
854
SEL_TREE **new_trees;
855
if (!(new_trees= (SEL_TREE**)alloc_root(param->mem_root, new_size)))
857
memcpy(new_trees, trees, old_size);
859
trees_next= trees + old_elements;
860
trees_end= trees + old_elements * realloc_ratio;
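    /*
      Growing the array by realloc_ratio keeps appends amortized O(1).
      The old array is not freed here; it lives on param->mem_root and is
      reclaimed when the whole range-analysis MEM_ROOT is freed.
    */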
*(trees_next++)= tree;
  Perform OR operation on this SEL_IMERGE and supplied SEL_TREE new_tree,
  combining new_tree with one of the trees in this SEL_IMERGE if they both
  have SEL_ARGs for the same key.

    or_sel_tree_with_checks()
      param     PARAM from SQL_SELECT::test_quick_select
      new_tree  SEL_TREE with type KEY or KEY_SMALLER.

    This does the following:
    (t_1||...||t_k)||new_tree =
     either
       = (t_1||...||t_k||new_tree)
     or
       = (t_1||....||(t_j|| new_tree)||...||t_k),

     where t_i, t_j are SEL_TREEs.
     new_tree is combined with the first t_j it has a SEL_ARG on a common
     key with. As a consequence of this, the choice of keys to do index_merge
     read may depend on the order of conditions in the WHERE part of the query.

     0  OK
     1  One of the trees was combined with new_tree to SEL_TREE::ALWAYS,
        and (*this) should be discarded.
    -1  An error occurred.
897
int SEL_IMERGE::or_sel_tree_with_checks(RANGE_OPT_PARAM *param, SEL_TREE *new_tree)
899
for (SEL_TREE** tree = trees;
903
if (sel_trees_can_be_ored(*tree, new_tree, param))
905
*tree = tree_or(param, *tree, new_tree);
908
if (((*tree)->type == SEL_TREE::MAYBE) ||
909
((*tree)->type == SEL_TREE::ALWAYS))
911
/* SEL_TREE::IMPOSSIBLE is impossible here */
916
/* New tree cannot be combined with any of existing trees. */
917
return or_sel_tree(param, new_tree);
922
  Perform OR operation on this index_merge and supplied index_merge list.

    0  - OK
    1  - One of conditions in result is always true and this SEL_IMERGE
         should be discarded.
   -1  - An error occurred
931
int SEL_IMERGE::or_sel_imerge_with_checks(RANGE_OPT_PARAM *param, SEL_IMERGE* imerge)
933
for (SEL_TREE** tree= imerge->trees;
934
tree != imerge->trees_next;
937
if (or_sel_tree_with_checks(param, *tree))
945
Perform AND operation on two index_merge lists and store result in im1.
948
inline void imerge_list_and_list(vector<SEL_IMERGE*> &im1, vector<SEL_IMERGE*> &im2)
950
im1.insert(im1.end(), im2.begin(), im2.end());
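  /*
    Plain concatenation is sufficient here: each SEL_IMERGE is an implicit
    conjunct (see the "(t_1||t_2||...||t_N) && (next)" form in the SEL_IMERGE
    comment), so ANDing two index_merge lists is just the union of their
    SEL_IMERGE elements.
  */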
  Perform OR operation on 2 index_merge lists, storing result in the first list.

    The following conversion is implemented:
     (a_1 &&...&& a_N)||(b_1 &&...&& b_K) = AND_i,j(a_i || b_j) =>
      => (a_1||b_1)

    i.e. all conjuncts except the first one are currently dropped.
    This is done to avoid producing N*K ways to do index_merge.

    If (a_1||b_1) produces a condition that is always true, NULL is returned
    and the index_merge is discarded (while it is actually possible to try
    harder).

    As a consequence of this, the choice of keys to do index_merge read may
    depend on the order of conditions in the WHERE part of the query.

    0      OK, result is stored in *im1
    other  Error, both passed lists are unusable
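
    A worked illustration (sizes chosen arbitrarily): for N=2 and K=3 the
    full expansion AND_i,j(a_i || b_j) would yield 2*3 = 6 candidate ways
    to do index_merge; keeping only (a_1 || b_1) avoids that blowup at the
    price of possibly missing a better plan.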
static int imerge_list_or_list(RANGE_OPT_PARAM *param,
979
vector<SEL_IMERGE*> &im1,
980
vector<SEL_IMERGE*> &im2)
982
SEL_IMERGE *imerge= im1.front();
984
im1.push_back(imerge);
986
return imerge->or_sel_imerge_with_checks(param, im2.front());
991
Perform OR operation on index_merge list and key tree.
994
false OK, result is stored in im1.
998
static bool imerge_list_or_tree(RANGE_OPT_PARAM *param,
999
vector<SEL_IMERGE*> &im1,
1002
vector<SEL_IMERGE*>::iterator imerge= im1.begin();
1004
while (imerge != im1.end())
1006
if ((*imerge)->or_sel_tree_with_checks(param, tree))
1007
imerge= im1.erase( imerge );
325
1016
/***************************************************************************
326
** Basic functions for SqlSelect and QuickRangeSelect
1017
** Basic functions for SQL_SELECT and QUICK_RANGE_SELECT
327
1018
***************************************************************************/
329
1020
/* make a select from mysql info
   Error is set as following:
   0 = ok
   1 = Got some error (out of memory?)
*/
372
optimizer::SqlSelect::SqlSelect()
376
file(static_cast<internal::IO_CACHE *>(memory::sql_calloc(sizeof(internal::IO_CACHE)))),
1059
SQL_SELECT::SQL_SELECT() :quick(0),cond(0),free_cond(0)
380
1062
needed_reg.reset();
385
void optimizer::SqlSelect::cleanup()
1067
void SQL_SELECT::cleanup()
395
close_cached_file(file);
1077
close_cached_file(&file);
399
optimizer::SqlSelect::~SqlSelect()
1081
SQL_SELECT::~SQL_SELECT()
405
bool optimizer::SqlSelect::check_quick(Session *session,
406
bool force_quick_range,
1087
bool SQL_SELECT::check_quick(Session *session, bool force_quick_range,
411
return (test_quick_select(session,
420
bool optimizer::SqlSelect::skip_record()
422
return (cond ? cond->val_int() == 0 : 0);
426
optimizer::QuickSelectInterface::QuickSelectInterface()
428
max_used_key_length(0),
1092
return test_quick_select(session, tmp, 0, limit,
1093
force_quick_range, false) < 0;
1097
bool SQL_SELECT::skip_record()
1099
return cond ? cond->val_int() == 0 : 0;
1103
QUICK_SELECT_I::QUICK_SELECT_I()
1104
:max_used_key_length(0),
1108
QUICK_RANGE_SELECT::QUICK_RANGE_SELECT(Session *session, Table *table, uint32_t key_nr,
1109
bool no_alloc, MEM_ROOT *parent_alloc,
1111
:free_file(0),cur_range(NULL),last_range(0),dont_free(0)
1113
my_bitmap_map *bitmap;
1115
in_ror_merged_scan= 0;
1119
key_part_info= head->key_info[index].key_part;
1120
my_init_dynamic_array(&ranges, sizeof(QUICK_RANGE*), 16, 16);
1122
/* 'session' is not accessible in QUICK_RANGE_SELECT::reset(). */
1123
mrr_buf_size= session->variables.read_rnd_buff_size;
1126
if (!no_alloc && !parent_alloc)
1128
// Allocates everything through the internal memroot
1129
init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
1130
session->mem_root= &alloc;
1133
memset(&alloc, 0, sizeof(alloc));
1135
record= head->record[0];
1136
save_read_set= head->read_set;
1137
save_write_set= head->write_set;
1139
/* Allocate a bitmap for used columns (Q: why not on MEM_ROOT?) */
1140
if (! (bitmap= (my_bitmap_map*) malloc(head->s->column_bitmap_size)))
1142
column_bitmap.setBitmap(NULL);
1147
column_bitmap.init(bitmap, head->s->fields);
1152
int QUICK_RANGE_SELECT::init()
1154
if (file->inited != handler::NONE)
1155
file->ha_index_or_rnd_end();
1156
return(file->ha_index_init(index, 1));
1160
void QUICK_RANGE_SELECT::range_end()
1162
if (file->inited != handler::NONE)
1163
file->ha_index_or_rnd_end();
1167
QUICK_RANGE_SELECT::~QUICK_RANGE_SELECT()
1171
/* file is NULL for CPK scan on covering ROR-intersection */
1178
file->extra(HA_EXTRA_NO_KEYREAD);
1182
file->ha_external_lock(current_session, F_UNLCK);
1187
delete_dynamic(&ranges); /* ranges are allocated in alloc */
1188
free_root(&alloc,MYF(0));
1190
head->column_bitmaps_set(save_read_set, save_write_set);
1191
assert(mrr_buf_desc == NULL);
1197
QUICK_INDEX_MERGE_SELECT::QUICK_INDEX_MERGE_SELECT(Session *session_param,
1199
:pk_quick_select(NULL), session(session_param)
1203
memset(&read_record, 0, sizeof(read_record));
1204
init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
1208
int QUICK_INDEX_MERGE_SELECT::init()
1213
int QUICK_INDEX_MERGE_SELECT::reset()
1215
return(read_keys_and_merge());
1219
QUICK_INDEX_MERGE_SELECT::push_quick_back(QUICK_RANGE_SELECT *quick_sel_range)
1222
Save quick_select that does scan on clustered primary key as it will be
1223
processed separately.
1225
if (head->file->primary_key_is_clustered() &&
1226
quick_sel_range->index == head->s->primary_key)
1227
pk_quick_select= quick_sel_range;
1229
return quick_selects.push_back(quick_sel_range);
1233
QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT()
1235
List_iterator_fast<QUICK_RANGE_SELECT> quick_it(quick_selects);
1236
QUICK_RANGE_SELECT* quick;
1238
while ((quick= quick_it++))
1240
quick_selects.delete_elements();
1241
delete pk_quick_select;
1242
free_root(&alloc,MYF(0));
1247
QUICK_ROR_INTERSECT_SELECT::QUICK_ROR_INTERSECT_SELECT(Session *session_param,
1249
bool retrieve_full_rows,
1250
MEM_ROOT *parent_alloc)
1251
: cpk_quick(NULL), session(session_param), need_to_fetch_row(retrieve_full_rows),
1256
record= head->record[0];
1258
init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
1260
memset(&alloc, 0, sizeof(MEM_ROOT));
1261
last_rowid= (unsigned char*) alloc_root(parent_alloc? parent_alloc : &alloc,
1262
head->file->ref_length);
1267
Do post-constructor initialization.
1269
QUICK_ROR_INTERSECT_SELECT::init()
1276
int QUICK_ROR_INTERSECT_SELECT::init()
1278
/* Check if last_rowid was successfully allocated in ctor */
1279
return(!last_rowid);
1284
Initialize this quick select to be a ROR-merged scan.
1287
QUICK_RANGE_SELECT::init_ror_merged_scan()
1288
reuse_handler If true, use head->file, otherwise create a separate
1292
This function creates and prepares for subsequent use a separate handler
1293
object if it can't reuse head->file. The reason for this is that during
1294
ROR-merge several key scans are performed simultaneously, and a single
1295
handler is only capable of preserving context of a single key scan.
1297
In ROR-merge the quick select doing the merge does full record retrieval,
while the merged quick selects read only keys.
1301
0 ROR child scan initialized, ok to use.
1305
int QUICK_RANGE_SELECT::init_ror_merged_scan(bool reuse_handler)
1307
handler *save_file= file, *org_file;
1310
in_ror_merged_scan= 1;
1313
if (init() || reset())
1317
head->column_bitmaps_set(&column_bitmap, &column_bitmap);
1321
/* Create a separate handler object for this quick select */
1324
/* already have own 'handler' object. */
1328
session= head->in_use;
1329
if (!(file= head->file->clone(session->mem_root)))
1332
Manually set the error flag. Note: there seem to be quite a few
places where a failure could cause the server to "hang" the client by
1334
sending no response to a query. ATM those are not real errors because
1335
the storage engine calls in question happen to never fail with the
1336
existing storage engines.
1338
my_error(ER_OUT_OF_RESOURCES, MYF(0)); /* purecov: inspected */
1339
/* Caller will free the memory */
1340
goto failure; /* purecov: inspected */
1343
head->column_bitmaps_set(&column_bitmap, &column_bitmap);
1345
if (file->ha_external_lock(session, F_RDLCK))
1348
if (init() || reset())
1350
file->ha_external_lock(session, F_UNLCK);
1355
last_rowid= file->ref;
1359
We are only going to read key fields and call position() on 'file'
1360
The following sets head->tmp_set to only use this key and then updates
1361
head->read_set and head->write_set to use this bitmap.
1362
The new bitmap is stored in 'column_bitmap' which is used in ::get_next()
1364
org_file= head->file;
1366
/* We don't have to set 'head->keyread' here as the 'file' is unique */
1367
if (!head->no_keyread)
1370
head->mark_columns_used_by_index(index);
1372
head->prepare_for_position();
1373
head->file= org_file;
1374
column_bitmap= *head->read_set;
1375
head->column_bitmaps_set(&column_bitmap, &column_bitmap);
1380
head->column_bitmaps_set(save_read_set, save_write_set);
1387
void QUICK_RANGE_SELECT::save_last_pos()
1389
file->position(record);
1394
Initialize this quick select to be a part of a ROR-merged scan.
1396
QUICK_ROR_INTERSECT_SELECT::init_ror_merged_scan()
1397
reuse_handler If true, use head->file, otherwise create separate
1403
int QUICK_ROR_INTERSECT_SELECT::init_ror_merged_scan(bool reuse_handler)
1405
List_iterator_fast<QUICK_RANGE_SELECT> quick_it(quick_selects);
1406
QUICK_RANGE_SELECT* quick;
1408
/* Initialize all merged "children" quick selects */
1409
assert(!need_to_fetch_row || reuse_handler);
1410
if (!need_to_fetch_row && reuse_handler)
1414
There is no use of this->file. Use it for the first of the merged range selects.
1417
if (quick->init_ror_merged_scan(true))
1419
quick->file->extra(HA_EXTRA_KEYREAD_PRESERVE_FIELDS);
1421
while ((quick= quick_it++))
1423
if (quick->init_ror_merged_scan(false))
1425
quick->file->extra(HA_EXTRA_KEYREAD_PRESERVE_FIELDS);
1426
/* All merged scans share the same record buffer in intersection. */
1427
quick->record= head->record[0];
1430
if (need_to_fetch_row && head->file->ha_rnd_init(1))
1439
Initialize quick select for row retrieval.
1447
int QUICK_ROR_INTERSECT_SELECT::reset()
1449
if (!scans_inited && init_ror_merged_scan(true))
1452
List_iterator_fast<QUICK_RANGE_SELECT> it(quick_selects);
1453
QUICK_RANGE_SELECT *quick;
1454
while ((quick= it++))
1461
Add a merged quick select to this ROR-intersection quick select.
1464
QUICK_ROR_INTERSECT_SELECT::push_quick_back()
1465
quick Quick select to be added. The quick select must return
1466
rows in rowid order.
1468
This call can only be made before init() is called.
1476
QUICK_ROR_INTERSECT_SELECT::push_quick_back(QUICK_RANGE_SELECT *quick)
1478
return quick_selects.push_back(quick);
1481
QUICK_ROR_INTERSECT_SELECT::~QUICK_ROR_INTERSECT_SELECT()
1483
quick_selects.delete_elements();
1485
free_root(&alloc,MYF(0));
1486
if (need_to_fetch_row && head->file->inited != handler::NONE)
1487
head->file->ha_rnd_end();
1492
QUICK_ROR_UNION_SELECT::QUICK_ROR_UNION_SELECT(Session *session_param,
1494
: session(session_param), scans_inited(false)
1498
rowid_length= table->file->ref_length;
1499
record= head->record[0];
1500
init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
1501
session_param->mem_root= &alloc;
1505
* Function object that is used as the comparison function
1506
* for the priority queue in the QUICK_ROR_UNION_SELECT
1509
class compare_functor
1511
QUICK_ROR_UNION_SELECT *self;
1513
compare_functor(QUICK_ROR_UNION_SELECT *in_arg)
1515
inline bool operator()(const QUICK_SELECT_I *i, const QUICK_SELECT_I *j) const
1517
int val= self->head->file->cmp_ref(i->last_rowid,
1524
Do post-constructor initialization.
1526
QUICK_ROR_UNION_SELECT::init()
1533
int QUICK_ROR_UNION_SELECT::init()
1536
new priority_queue<QUICK_SELECT_I *, vector<QUICK_SELECT_I *>, compare_functor >(compare_functor(this));
1537
if (!(cur_rowid= (unsigned char*) alloc_root(&alloc, 2*head->file->ref_length)))
1539
prev_rowid= cur_rowid + head->file->ref_length;
1545
Initialize quick select for row retrieval.
1554
int QUICK_ROR_UNION_SELECT::reset()
1556
QUICK_SELECT_I *quick;
1558
have_prev_rowid= false;
1561
List_iterator_fast<QUICK_SELECT_I> it(quick_selects);
1562
while ((quick= it++))
1564
if (quick->init_ror_merged_scan(false))
1569
while (!queue->empty())
1572
Initialize scans for merged quick selects and put all merged quick
1573
selects into the queue.
1575
List_iterator_fast<QUICK_SELECT_I> it(quick_selects);
1576
while ((quick= it++))
1580
if ((error= quick->get_next()))
1582
if (error == HA_ERR_END_OF_FILE)
1586
quick->save_last_pos();
1590
if (head->file->ha_rnd_init(1))
1600
QUICK_ROR_UNION_SELECT::push_quick_back(QUICK_SELECT_I *quick_sel_range)
1602
return quick_selects.push_back(quick_sel_range);
1605
QUICK_ROR_UNION_SELECT::~QUICK_ROR_UNION_SELECT()
1607
while (!queue->empty())
1610
quick_selects.delete_elements();
1611
if (head->file->inited != handler::NONE)
1612
head->file->ha_rnd_end();
1613
free_root(&alloc,MYF(0));
1618
QUICK_RANGE::QUICK_RANGE()
1619
:min_key(0),max_key(0),min_length(0),max_length(0),
1620
flag(NO_MIN_RANGE | NO_MAX_RANGE),
1621
min_keypart_map(0), max_keypart_map(0)
1624
SEL_ARG::SEL_ARG(SEL_ARG &arg) :Sql_alloc()
1627
min_flag=arg.min_flag;
1628
max_flag=arg.max_flag;
1629
maybe_flag=arg.maybe_flag;
1630
maybe_null=arg.maybe_null;
1633
min_value=arg.min_value;
1634
max_value=arg.max_value;
1635
next_key_part=arg.next_key_part;
1636
use_count=1; elements=1;
1640
inline void SEL_ARG::make_root()
1642
left=right= &null_element;
1645
use_count=0; elements=1;
1648
SEL_ARG::SEL_ARG(Field *f,const unsigned char *min_value_arg,
1649
const unsigned char *max_value_arg)
1650
:min_flag(0), max_flag(0), maybe_flag(0), maybe_null(f->real_maybe_null()),
1651
elements(1), use_count(1), field(f), min_value((unsigned char*) min_value_arg),
1652
max_value((unsigned char*) max_value_arg), next(0),prev(0),
1653
next_key_part(0),color(BLACK),type(KEY_RANGE)
1655
left=right= &null_element;
1658
SEL_ARG::SEL_ARG(Field *field_,uint8_t part_,
1659
unsigned char *min_value_, unsigned char *max_value_,
1660
uint8_t min_flag_,uint8_t max_flag_,uint8_t maybe_flag_)
1661
:min_flag(min_flag_),max_flag(max_flag_),maybe_flag(maybe_flag_),
1662
part(part_),maybe_null(field_->real_maybe_null()), elements(1),use_count(1),
1663
field(field_), min_value(min_value_), max_value(max_value_),
1664
next(0),prev(0),next_key_part(0),color(BLACK),type(KEY_RANGE)
1666
left=right= &null_element;
1669
SEL_ARG *SEL_ARG::clone(RANGE_OPT_PARAM *param, SEL_ARG *new_parent,
1674
/* Bail out if we have already generated too many SEL_ARGs */
1675
if (++param->alloced_sel_args > MAX_SEL_ARGS)
1678
if (type != KEY_RANGE)
1680
if (!(tmp= new (param->mem_root) SEL_ARG(type)))
1681
return 0; // out of memory
1682
tmp->prev= *next_arg; // Link into next/prev chain
1683
(*next_arg)->next=tmp;
1688
if (!(tmp= new (param->mem_root) SEL_ARG(field,part, min_value,max_value,
1689
min_flag, max_flag, maybe_flag)))
1691
tmp->parent=new_parent;
1692
tmp->next_key_part=next_key_part;
1693
if (left != &null_element)
1694
if (!(tmp->left=left->clone(param, tmp, next_arg)))
1697
tmp->prev= *next_arg; // Link into next/prev chain
1698
(*next_arg)->next=tmp;
1701
if (right != &null_element)
1702
if (!(tmp->right= right->clone(param, tmp, next_arg)))
1705
increment_use_count(1);
1707
tmp->elements= this->elements;
1711
SEL_ARG *SEL_ARG::first()
1713
SEL_ARG *next_arg=this;
1714
if (!next_arg->left)
1715
return 0; // MAYBE_KEY
1716
while (next_arg->left != &null_element)
1717
next_arg=next_arg->left;
1721
SEL_ARG *SEL_ARG::last()
1723
SEL_ARG *next_arg=this;
1724
if (!next_arg->right)
1725
return 0; // MAYBE_KEY
1726
while (next_arg->right != &null_element)
1727
next_arg=next_arg->right;
1733
  Check if a compare is ok, when one takes ranges into account
  Returns -2 or 2 if the ranges were 'joined', like < 2 and >= 2
1737
static int sel_cmp(Field *field, unsigned char *a, unsigned char *b, uint8_t a_flag,
1741
/* First check if there was a compare to a min or max element */
1742
if (a_flag & (NO_MIN_RANGE | NO_MAX_RANGE))
1744
if ((a_flag & (NO_MIN_RANGE | NO_MAX_RANGE)) ==
1745
(b_flag & (NO_MIN_RANGE | NO_MAX_RANGE)))
1747
return (a_flag & NO_MIN_RANGE) ? -1 : 1;
1749
if (b_flag & (NO_MIN_RANGE | NO_MAX_RANGE))
1750
return (b_flag & NO_MIN_RANGE) ? 1 : -1;
1752
if (field->real_maybe_null()) // If null is part of key
1759
goto end; // NULLs were equal
1760
a++; b++; // Skip NULL marker
1762
cmp=field->key_cmp(a , b);
1763
if (cmp) return cmp < 0 ? -1 : 1; // The values differed
1765
// Check if the compared equal arguments were defined with an open/closed range
1767
if (a_flag & (NEAR_MIN | NEAR_MAX))
1769
if ((a_flag & (NEAR_MIN | NEAR_MAX)) == (b_flag & (NEAR_MIN | NEAR_MAX)))
1771
if (!(b_flag & (NEAR_MIN | NEAR_MAX)))
1772
return (a_flag & NEAR_MIN) ? 2 : -2;
1773
return (a_flag & NEAR_MIN) ? 1 : -1;
1775
if (b_flag & (NEAR_MIN | NEAR_MAX))
1776
return (b_flag & NEAR_MIN) ? -2 : 2;
1777
return 0; // The elements were equal
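
/*
  Illustrative note on the +/-2 results above: for an integer keypart, the
  bounds of "x < 2" and "x >= 2" compare as equal values with opposite
  open/closed flags, so sel_cmp() returns -2/2 and key_or() can join the two
  ranges into one (see its "Join near ranges" case).
*/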
1781
SEL_ARG *SEL_ARG::clone_tree(RANGE_OPT_PARAM *param)
1783
SEL_ARG tmp_link,*next_arg,*root;
1784
next_arg= &tmp_link;
1785
if (!(root= clone(param, (SEL_ARG *) 0, &next_arg)))
1787
next_arg->next=0; // Fix last link
1788
tmp_link.next->prev=0; // Fix first link
1789
if (root) // If not OOM
3309
4848
if (*key1 || *key2)
3311
4850
if (*key1 && !(*key1)->simple_key())
3312
flag|=CLONE_KEY1_MAYBE;
4851
flag|=CLONE_KEY1_MAYBE;
3313
4852
if (*key2 && !(*key2)->simple_key())
3314
flag|=CLONE_KEY2_MAYBE;
4853
flag|=CLONE_KEY2_MAYBE;
3315
4854
*key1=key_and(param, *key1, *key2, flag);
3316
if (*key1 && (*key1)->type == optimizer::SEL_ARG::IMPOSSIBLE)
4855
if (*key1 && (*key1)->type == SEL_ARG::IMPOSSIBLE)
3318
tree1->type= optimizer::SEL_TREE::IMPOSSIBLE;
4857
tree1->type= SEL_TREE::IMPOSSIBLE;
3321
4860
result_keys.set(key1 - tree1->keys);
4862
if (*key1 && param->alloced_sel_args < SEL_ARG::MAX_SEL_ARGS)
4863
(*key1)->test_use_count(*key1);
3324
4867
tree1->keys_map= result_keys;
3325
4868
/* dispose index_merge if there is a "range" option */
3326
4869
if (result_keys.any())
3328
tree1->merges.empty();
4871
tree1->merges.clear();
3332
4875
/* ok, both trees are index_merge trees */
3333
imerge_list_and_list(&tree1->merges, &tree2->merges);
4876
imerge_list_and_list(tree1->merges, tree2->merges);
4882
Check if two SEL_TREES can be combined into one (i.e. a single key range
4883
read can be constructed for "cond_of_tree1 OR cond_of_tree2" ) without
4887
bool sel_trees_can_be_ored(SEL_TREE *tree1, SEL_TREE *tree2,
4888
RANGE_OPT_PARAM* param)
4890
key_map common_keys= tree1->keys_map;
4891
common_keys&= tree2->keys_map;
4893
if (common_keys.none())
4896
/* trees have a common key, check if they refer to same key part */
4897
SEL_ARG **key1,**key2;
4898
for (uint32_t key_no=0; key_no < param->keys; key_no++)
4900
if (common_keys.test(key_no))
4902
key1= tree1->keys + key_no;
4903
key2= tree2->keys + key_no;
4904
if ((*key1)->part == (*key2)->part)
4915
Remove the trees that are not suitable for record retrieval.
4917
param Range analysis parameter
4918
tree Tree to be processed, tree->type is KEY or KEY_SMALLER
4921
This function walks through tree->keys[] and removes the SEL_ARG* trees
that are not "maybe" trees (*) and cannot be used to construct quick range
selects.
4924
(*) - have type MAYBE or MAYBE_KEY. Perhaps we should remove trees of
4925
these types here as well.
4927
A SEL_ARG* tree cannot be used to construct quick select if it has
4928
tree->part != 0. (e.g. it could represent "keypart2 < const").
4930
WHY THIS FUNCTION IS NEEDED
4932
Normally we allow construction of SEL_TREE objects that have SEL_ARG
4933
trees that do not allow quick range select construction. For example for
4934
" keypart1=1 AND keypart2=2 " the execution will proceed as follows:
4935
tree1= SEL_TREE { SEL_ARG{keypart1=1} }
4936
tree2= SEL_TREE { SEL_ARG{keypart2=2} } -- can't make quick range select
4938
call tree_and(tree1, tree2) -- this joins SEL_ARGs into a usable SEL_ARG
4941
There is an exception though: when we construct index_merge SEL_TREE,
4942
any SEL_ARG* tree that cannot be used to construct quick range select can
4943
be removed, because current range analysis code doesn't provide any way
4944
that tree could be later combined with another tree.
4945
Consider an example: we should not construct
4947
merges = SEL_IMERGE {
4948
SEL_TREE(t.key1part1 = 1),
4949
SEL_TREE(t.key2part2 = 2) -- (*)
4953
- (*) cannot be used to construct quick range select,
4954
- There is no execution path that would cause (*) to be converted to
4955
a tree that could be used.
4957
The latter is easy to verify: first, notice that the only way to convert
4958
(*) into a usable tree is to call tree_and(something, (*)).
4960
Second, look at what the tree_and/tree_or functions would do when passed a
SEL_TREE that has the structure like st1 tree has, and conclude that
4962
tree_and(something, (*)) will not be called.
4965
0 Ok, some suitable trees left
4966
1 No tree->keys[] left.
4969
static bool remove_nonrange_trees(RANGE_OPT_PARAM *param, SEL_TREE *tree)
4972
for (uint32_t i=0; i < param->keys; i++)
4976
if (tree->keys[i]->part)
4978
tree->keys[i]= NULL;
4979
tree->keys_map.reset(i);
4990
tree_or(RANGE_OPT_PARAM *param,SEL_TREE *tree1,SEL_TREE *tree2)
4992
if (!tree1 || !tree2)
4994
if (tree1->type == SEL_TREE::IMPOSSIBLE || tree2->type == SEL_TREE::ALWAYS)
4996
if (tree2->type == SEL_TREE::IMPOSSIBLE || tree1->type == SEL_TREE::ALWAYS)
4998
if (tree1->type == SEL_TREE::MAYBE)
4999
return(tree1); // Can't use this
5000
if (tree2->type == SEL_TREE::MAYBE)
5003
SEL_TREE *result= 0;
5004
key_map result_keys;
5005
result_keys.reset();
5006
if (sel_trees_can_be_ored(tree1, tree2, param))
5008
/* Join the trees key per key */
5009
SEL_ARG **key1,**key2,**end;
5010
for (key1= tree1->keys,key2= tree2->keys,end= key1+param->keys ;
5011
key1 != end ; key1++,key2++)
5013
*key1=key_or(param, *key1, *key2);
5016
result=tree1; // Added to tree1
5017
result_keys.set(key1 - tree1->keys);
5019
if (param->alloced_sel_args < SEL_ARG::MAX_SEL_ARGS)
5020
(*key1)->test_use_count(*key1);
5025
result->keys_map= result_keys;
5029
/* ok, two trees have KEY type but cannot be used without index merge */
5030
if ((tree1->merges.empty() == true) && (tree2->merges.empty() == true))
5032
if (param->remove_jump_scans)
5034
bool no_trees= remove_nonrange_trees(param, tree1);
5035
no_trees= no_trees || remove_nonrange_trees(param, tree2);
5037
return(new SEL_TREE(SEL_TREE::ALWAYS));
5040
/* both trees are "range" trees, produce new index merge structure. */
5041
result= new SEL_TREE();
5042
SEL_IMERGE *merge= new SEL_IMERGE();
5043
result->merges.push_back(merge);
5045
if( merge->or_sel_tree(param, tree1) || merge->or_sel_tree(param, tree2))
5048
result->type= tree1->type;
5050
else if ((tree1->merges.empty() == false) && (tree2->merges.empty() == false))
5052
if (imerge_list_or_list(param, tree1->merges, tree2->merges))
5053
result= new SEL_TREE(SEL_TREE::ALWAYS);
5059
/* one tree is index merge tree and another is range tree */
5060
if (tree1->merges.empty() == true)
5061
std::swap(tree1, tree2);
5063
if (param->remove_jump_scans && remove_nonrange_trees(param, tree2))
5064
return(new SEL_TREE(SEL_TREE::ALWAYS));
5066
/* add tree2 to tree1->merges, checking if it collapses to ALWAYS */
5067
if (imerge_list_or_tree(param, tree1->merges, tree2))
5068
result= new SEL_TREE(SEL_TREE::ALWAYS);
3339
5077
/* AND key trees where key1->part < key2->part */
3341
static optimizer::SEL_ARG *
3342
and_all_keys(optimizer::RangeParameter *param,
3343
optimizer::SEL_ARG *key1,
3344
optimizer::SEL_ARG *key2,
5080
and_all_keys(RANGE_OPT_PARAM *param, SEL_ARG *key1, SEL_ARG *key2,
3345
5081
uint32_t clone_flag)
3347
optimizer::SEL_ARG *next= NULL;
3348
5084
ulong use_count=key1->use_count;
3350
5086
if (key1->elements != 1)
5262
key_or(RANGE_OPT_PARAM *param, SEL_ARG *key1,SEL_ARG *key2)
5282
if (key1->part != key2->part)
5286
return 0; // Can't optimize this
5289
// If one of the keys is MAYBE_KEY then the found region may be bigger
5290
if (key1->type == SEL_ARG::MAYBE_KEY)
5296
if (key2->type == SEL_ARG::MAYBE_KEY)
5303
if (key1->use_count > 0)
5305
if (key2->use_count == 0 || key1->elements > key2->elements)
5307
std::swap(key1,key2);
5309
if (key1->use_count > 0 || !(key1=key1->clone_tree(param)))
5313
// Add tree at key2 to tree at key1
5314
bool key2_shared=key2->use_count != 0;
5315
key1->maybe_flag|=key2->maybe_flag;
5317
for (key2=key2->first(); key2; )
5319
SEL_ARG *tmp=key1->find_range(key2); // Find key1.min <= key2.min
5324
tmp=key1->first(); // tmp.min > key2.min
5327
else if ((cmp=tmp->cmp_max_to_min(key2)) < 0)
5328
{ // Found tmp.max < key2.min
5329
SEL_ARG *next=tmp->next;
5330
if (cmp == -2 && eq_tree(tmp->next_key_part,key2->next_key_part))
5332
// Join near ranges like tmp.max < 0 and key2.min >= 0
5333
SEL_ARG *key2_next=key2->next;
5336
if (!(key2=new SEL_ARG(*key2)))
5337
return 0; // out of memory
5338
key2->increment_use_count(key1->use_count+1);
5339
key2->next=key2_next; // New copy of key2
5341
key2->copy_min(tmp);
5342
if (!(key1=key1->tree_delete(tmp)))
5343
{ // Only one key in tree
5350
if (!(tmp=next)) // tmp.min > key2.min
5351
break; // Copy rest of key2
5354
{ // tmp.min > key2.min
5356
if ((tmp_cmp=tmp->cmp_min_to_max(key2)) > 0) // if tmp.min > key2.max
5358
if (tmp_cmp == 2 && eq_tree(tmp->next_key_part,key2->next_key_part))
5359
{ // ranges are connected
5360
tmp->copy_min_to_min(key2);
5361
key1->merge_flags(key2);
5362
if (tmp->min_flag & NO_MIN_RANGE &&
5363
tmp->max_flag & NO_MAX_RANGE)
5365
if (key1->maybe_flag)
5366
return new SEL_ARG(SEL_ARG::MAYBE_KEY);
5369
key2->increment_use_count(-1); // Free not used tree
5375
SEL_ARG *next=key2->next; // Keys are not overlapping
5378
SEL_ARG *cpy= new SEL_ARG(*key2); // Must make copy
5381
key1=key1->insert(cpy);
5382
key2->increment_use_count(key1->use_count+1);
5385
key1=key1->insert(key2); // Will destroy key2_root
5392
// tmp.max >= key2.min && tmp.min <= key2.max (overlapping ranges)
5393
if (eq_tree(tmp->next_key_part,key2->next_key_part))
5395
if (tmp->is_same(key2))
5397
tmp->merge_flags(key2); // Copy maybe flags
5398
key2->increment_use_count(-1); // Free not used tree
5403
while (last->next && last->next->cmp_min_to_max(key2) <= 0 &&
5404
eq_tree(last->next->next_key_part,key2->next_key_part))
5408
key1=key1->tree_delete(save);
5410
last->copy_min(tmp);
5411
if (last->copy_min(key2) || last->copy_max(key2))
5414
for (; key2 ; key2=key2->next)
5415
key2->increment_use_count(-1); // Free not used tree
5416
if (key1->maybe_flag)
5417
return new SEL_ARG(SEL_ARG::MAYBE_KEY);
5425
if (cmp >= 0 && tmp->cmp_min_to_min(key2) < 0)
5426
{ // tmp.min <= x < key2.min
5427
SEL_ARG *new_arg=tmp->clone_first(key2);
5430
if ((new_arg->next_key_part= key1->next_key_part))
5431
new_arg->increment_use_count(key1->use_count+1);
5432
tmp->copy_min_to_min(key2);
5433
key1=key1->insert(new_arg);
5436
// tmp.min >= key2.min && tmp.min <= key2.max
5437
SEL_ARG key(*key2); // Get copy we can modify
5440
if (tmp->cmp_min_to_min(&key) > 0)
5441
{ // key.min <= x < tmp.min
5442
SEL_ARG *new_arg=key.clone_first(tmp);
5445
if ((new_arg->next_key_part=key.next_key_part))
5446
new_arg->increment_use_count(key1->use_count+1);
5447
key1=key1->insert(new_arg);
5449
if ((cmp=tmp->cmp_max_to_max(&key)) <= 0)
5450
{ // tmp.min. <= x <= tmp.max
5451
tmp->maybe_flag|= key.maybe_flag;
5452
key.increment_use_count(key1->use_count+1);
5453
tmp->next_key_part= key_or(param, tmp->next_key_part, key.next_key_part);
5454
if (!cmp) // Key2 is ready
5456
key.copy_max_to_min(tmp);
5457
if (!(tmp=tmp->next))
5459
SEL_ARG *tmp2= new SEL_ARG(key);
5462
key1=key1->insert(tmp2);
5466
if (tmp->cmp_min_to_max(&key) > 0)
5468
SEL_ARG *tmp2= new SEL_ARG(key);
5471
key1=key1->insert(tmp2);
5477
SEL_ARG *new_arg=tmp->clone_last(&key); // tmp.min <= x <= key.max
5480
tmp->copy_max_to_min(&key);
5481
tmp->increment_use_count(key1->use_count+1);
5482
/* Increment key count as it may be used for next loop */
5483
key.increment_use_count(1);
5484
new_arg->next_key_part= key_or(param, tmp->next_key_part, key.next_key_part);
5485
key1=key1->insert(new_arg);
5495
SEL_ARG *next=key2->next;
5498
SEL_ARG *tmp=new SEL_ARG(*key2); // Must make copy
5501
key2->increment_use_count(key1->use_count+1);
5502
key1=key1->insert(tmp);
5505
key1=key1->insert(key2); // Will destroy key2_root
5513
/* Compare if two trees are equal */
5515
static bool eq_tree(SEL_ARG* a,SEL_ARG *b)
5519
if (!a || !b || !a->is_same(b))
5521
if (a->left != &null_element && b->left != &null_element)
5523
if (!eq_tree(a->left,b->left))
5526
else if (a->left != &null_element || b->left != &null_element)
5528
if (a->right != &null_element && b->right != &null_element)
5530
if (!eq_tree(a->right,b->right))
5533
else if (a->right != &null_element || b->right != &null_element)
5535
if (a->next_key_part != b->next_key_part)
5537
if (!a->next_key_part != !b->next_key_part ||
5538
!eq_tree(a->next_key_part, b->next_key_part))
5546
SEL_ARG::insert(SEL_ARG *key)
5548
SEL_ARG *element, **par= NULL, *last_element= NULL;
5550
for (element= this; element != &null_element ; )
5552
last_element=element;
5553
if (key->cmp_min_to_min(element) > 0)
5555
par= &element->right; element= element->right;
5559
par = &element->left; element= element->left;
5563
key->parent=last_element;
5565
if (par == &last_element->left)
5567
key->next=last_element;
5568
if ((key->prev=last_element->prev))
5569
key->prev->next=key;
5570
last_element->prev=key;
5574
if ((key->next=last_element->next))
5575
key->next->prev=key;
5576
key->prev=last_element;
5577
last_element->next=key;
5579
key->left=key->right= &null_element;
5580
SEL_ARG *root=rb_insert(key); // rebalance tree
5581
root->use_count=this->use_count; // copy root info
5582
root->elements= this->elements+1;
5583
root->maybe_flag=this->maybe_flag;
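/*
  A minimal illustrative sketch of the idea SEL_ARG::insert() uses above:
  walk down a binary search tree keyed on the interval minimum and thread the
  new node into the sorted next/prev list of its in-order neighbours.  The
  stand-alone type and function names below are hypothetical, and the
  red-black rebalancing step is omitted.
*/
#if 0   /* sketch only */
#include <cstddef>

struct IntervalNode
{
  int min_value;                       // stand-in for SEL_ARG's min key
  IntervalNode *left, *right, *parent; // tree links
  IntervalNode *next, *prev;           // sorted-list links
};

static IntervalNode *sketch_insert(IntervalNode *root, IntervalNode *key)
{
  IntervalNode **par= &root, *last= NULL;
  for (IntervalNode *el= root; el; )
  {
    last= el;
    if (key->min_value > el->min_value)
    { par= &el->right; el= el->right; }
    else
    { par= &el->left;  el= el->left; }
  }
  *par= key;
  key->parent= last;
  key->left= key->right= NULL;
  if (!last)                            // tree was empty
  { key->next= key->prev= NULL; return key; }
  if (par == &last->left)               // key precedes 'last' in key order
  {
    key->next= last;
    if ((key->prev= last->prev)) key->prev->next= key;
    last->prev= key;
  }
  else                                  // key follows 'last' in key order
  {
    if ((key->next= last->next)) key->next->prev= key;
    key->prev= last;
    last->next= key;
  }
  return root;                          // rebalancing omitted in this sketch
}
#endif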
5589
** Find best key with min <= given key
5590
** Because of the call context, this should never return 0 to get_range
5594
SEL_ARG::find_range(SEL_ARG *key)
5596
SEL_ARG *element=this,*found=0;
5600
if (element == &null_element)
5602
int cmp=element->cmp_min_to_min(key);
5608
element=element->right;
5611
element=element->left;
5617
Remove an element from the tree
5621
key Key that is to be deleted from tree (this)
5624
This also frees all subtrees that are used by the element
5627
root of new tree (with key deleted)
5631
SEL_ARG::tree_delete(SEL_ARG *key)
5633
enum leaf_color remove_color;
5634
SEL_ARG *root,*nod,**par,*fix_par;
5639
/* Unlink from list */
5641
key->prev->next=key->next;
5643
key->next->prev=key->prev;
5644
key->increment_use_count(-1);
5648
par=key->parent_ptr();
5650
if (key->left == &null_element)
5652
*par=nod=key->right;
5653
fix_par=key->parent;
5654
if (nod != &null_element)
5655
nod->parent=fix_par;
5656
remove_color= key->color;
5658
else if (key->right == &null_element)
5660
*par= nod=key->left;
5661
nod->parent=fix_par=key->parent;
5662
remove_color= key->color;
5666
SEL_ARG *tmp=key->next; // next bigger key (exist!)
5667
nod= *tmp->parent_ptr()= tmp->right; // unlink tmp from tree
5668
fix_par=tmp->parent;
5669
if (nod != &null_element)
5670
nod->parent=fix_par;
5671
remove_color= tmp->color;
5673
tmp->parent=key->parent; // Move node in place of key
5674
(tmp->left=key->left)->parent=tmp;
5675
if ((tmp->right=key->right) != &null_element)
5676
tmp->right->parent=tmp;
5677
tmp->color=key->color;
5679
if (fix_par == key) // key->right == key->next
5680
fix_par=tmp; // new parent of nod
5683
if (root == &null_element)
5684
return 0; // Maybe root later
5685
if (remove_color == BLACK)
5686
root=rb_delete_fixup(root,nod,fix_par);
5688
test_rb_tree(root,root->parent);
5689
#endif /* EXTRA_DEBUG */
5691
root->use_count=this->use_count; // Fix root counters
5692
root->elements=this->elements-1;
5693
root->maybe_flag=this->maybe_flag;
5698
/* Functions to fix up the tree after insert and delete */
5700
static void left_rotate(SEL_ARG **root,SEL_ARG *leaf)
5702
SEL_ARG *y=leaf->right;
5703
leaf->right=y->left;
5704
if (y->left != &null_element)
5705
y->left->parent=leaf;
5706
if (!(y->parent=leaf->parent))
5709
*leaf->parent_ptr()=y;
5714
static void right_rotate(SEL_ARG **root,SEL_ARG *leaf)
5716
SEL_ARG *y=leaf->left;
5717
leaf->left=y->right;
5718
if (y->right != &null_element)
5719
y->right->parent=leaf;
5720
if (!(y->parent=leaf->parent))
5723
*leaf->parent_ptr()=y;
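/*
  A minimal sketch of the classic left/right rotations used by the red-black
  fixup code above, written for a stand-alone node type with a NULL sentinel
  instead of &null_element.  The names below are hypothetical.
*/
#if 0   /* sketch only */
struct RbNode
{
  RbNode *left, *right, *parent;
};

static void sketch_left_rotate(RbNode **root, RbNode *x)
{
  RbNode *y= x->right;                  // y moves up, x becomes y's left child
  x->right= y->left;
  if (y->left)
    y->left->parent= x;
  y->parent= x->parent;
  if (!x->parent)
    *root= y;
  else if (x == x->parent->left)
    x->parent->left= y;
  else
    x->parent->right= y;
  y->left= x;
  x->parent= y;
}

static void sketch_right_rotate(RbNode **root, RbNode *x)
{
  RbNode *y= x->left;                   // mirror image of the left rotation
  x->left= y->right;
  if (y->right)
    y->right->parent= x;
  y->parent= x->parent;
  if (!x->parent)
    *root= y;
  else if (x == x->parent->right)
    x->parent->right= y;
  else
    x->parent->left= y;
  y->right= x;
  x->parent= y;
}
#endif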
5730
SEL_ARG::rb_insert(SEL_ARG *leaf)
5732
SEL_ARG *y,*par,*par2,*root;
5733
root= this; root->parent= 0;
5736
while (leaf != root && (par= leaf->parent)->color == RED)
5737
{ // This can't be root or 1 level under
5738
if (par == (par2= leaf->parent->parent)->left)
5741
if (y->color == RED)
5746
leaf->color=RED; /* And the loop continues */
5750
if (leaf == par->right)
5752
left_rotate(&root,leaf->parent);
5753
par=leaf; /* leaf is now parent to old leaf */
5757
right_rotate(&root,par2);
5764
if (y->color == RED)
5769
leaf->color=RED; /* And the loop continues */
5773
if (leaf == par->left)
5775
right_rotate(&root,par);
5780
left_rotate(&root,par2);
5787
test_rb_tree(root,root->parent);
5788
#endif /* EXTRA_DEBUG */
5794
SEL_ARG *rb_delete_fixup(SEL_ARG *root,SEL_ARG *key,SEL_ARG *par)
5800
while (x != root && x->color == SEL_ARG::BLACK)
5805
if (w->color == SEL_ARG::RED)
5807
w->color=SEL_ARG::BLACK;
5808
par->color=SEL_ARG::RED;
5809
left_rotate(&root,par);
5812
if (w->left->color == SEL_ARG::BLACK && w->right->color == SEL_ARG::BLACK)
5814
w->color=SEL_ARG::RED;
5819
if (w->right->color == SEL_ARG::BLACK)
5821
w->left->color=SEL_ARG::BLACK;
5822
w->color=SEL_ARG::RED;
5823
right_rotate(&root,w);
5826
w->color=par->color;
5827
par->color=SEL_ARG::BLACK;
5828
w->right->color=SEL_ARG::BLACK;
5829
left_rotate(&root,par);
5837
if (w->color == SEL_ARG::RED)
5839
w->color=SEL_ARG::BLACK;
5840
par->color=SEL_ARG::RED;
5841
right_rotate(&root,par);
5844
if (w->right->color == SEL_ARG::BLACK && w->left->color == SEL_ARG::BLACK)
5846
w->color=SEL_ARG::RED;
5851
if (w->left->color == SEL_ARG::BLACK)
5853
w->right->color=SEL_ARG::BLACK;
5854
w->color=SEL_ARG::RED;
5855
left_rotate(&root,w);
5858
w->color=par->color;
5859
par->color=SEL_ARG::BLACK;
5860
w->left->color=SEL_ARG::BLACK;
5861
right_rotate(&root,par);
5868
x->color=SEL_ARG::BLACK;
5873
/* Test that the properties for a red-black tree hold */
5876
int test_rb_tree(SEL_ARG *element,SEL_ARG *parent)
5878
int count_l,count_r;
5880
if (element == &null_element)
5881
return 0; // Found end of tree
5882
if (element->parent != parent)
5884
errmsg_printf(ERRMSG_LVL_ERROR, "Wrong tree: Parent doesn't point at parent");
5887
if (element->color == SEL_ARG::RED &&
5888
(element->left->color == SEL_ARG::RED ||
5889
element->right->color == SEL_ARG::RED))
5891
errmsg_printf(ERRMSG_LVL_ERROR, "Wrong tree: Found two red in a row");
5894
if (element->left == element->right && element->left != &null_element)
5896
errmsg_printf(ERRMSG_LVL_ERROR, "Wrong tree: Found right == left");
5899
count_l=test_rb_tree(element->left,element);
5900
count_r=test_rb_tree(element->right,element);
5901
if (count_l >= 0 && count_r >= 0)
5903
if (count_l == count_r)
5904
return count_l+(element->color == SEL_ARG::BLACK);
5905
errmsg_printf(ERRMSG_LVL_ERROR, "Wrong tree: Incorrect black-count: %d - %d",
5908
return -1; // Error, no more warnings
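/*
  A minimal sketch of the same black-height check test_rb_tree() performs
  above, on a stand-alone node type.  It returns the black height of the
  subtree, or -1 if a red-black property is violated.  The names are
  hypothetical.
*/
#if 0   /* sketch only */
enum SketchColor { SKETCH_RED, SKETCH_BLACK };

struct SketchNode
{
  SketchColor color;
  SketchNode *left, *right;
};

static int sketch_check_rb(const SketchNode *node)
{
  if (!node)
    return 0;                           // leaves count as black height 0
  if (node->color == SKETCH_RED &&
      ((node->left  && node->left->color  == SKETCH_RED) ||
       (node->right && node->right->color == SKETCH_RED)))
    return -1;                          // two reds in a row
  int hl= sketch_check_rb(node->left);
  int hr= sketch_check_rb(node->right);
  if (hl < 0 || hr < 0 || hl != hr)
    return -1;                          // unequal black counts
  return hl + (node->color == SKETCH_BLACK);
}
#endif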
5913
Count how many times SEL_ARG graph "root" refers to its part "key"
5916
count_key_part_usage()
5917
root An RB-Root node in a SEL_ARG graph.
5918
key Another RB-Root node in that SEL_ARG graph.
5921
The passed "root" node may refer to "key" node via root->next_key_part,
5924
This function counts how many times the node "key" is referred (via
5925
SEL_ARG::next_key_part) by
5926
- intervals of RB-tree pointed by "root",
5927
- intervals of RB-trees that are pointed by SEL_ARG::next_key_part from
5928
intervals of RB-tree pointed by "root",
5931
Here is an example (horizontal links represent next_key_part pointers,
5932
vertical links - next/prev pointers):
5935
|root|-----------------+
5939
+----+ +---+ $ | +---+ Here the return value
5940
| |- ... -| |---$-+--+->|key| will be 4.
5941
+----+ +---+ $ | | +---+
5946
| |---| |---------+ |
5953
Number of links to "key" from nodes reachable from "root".
5956
static ulong count_key_part_usage(SEL_ARG *root, SEL_ARG *key)
5959
for (root=root->first(); root ; root=root->next)
5961
if (root->next_key_part)
5963
if (root->next_key_part == key)
5965
if (root->next_key_part->part < key->part)
5966
count+=count_key_part_usage(root->next_key_part,key);
5974
Check if SEL_ARG::use_count value is correct
5977
SEL_ARG::test_use_count()
5978
root The root node of the SEL_ARG graph (an RB-tree root node that
5979
has the least value of sel_arg->part in the entire graph, and
5980
thus is the "origin" of the graph)
5983
Check if SEL_ARG::use_count value is correct. See the definition of
5984
use_count for what is "correct".
5987
void SEL_ARG::test_use_count(SEL_ARG *root)
5990
if (this == root && use_count != 1)
5992
errmsg_printf(ERRMSG_LVL_INFO, "Use_count: Wrong count %lu for root",use_count);
5995
if (this->type != SEL_ARG::KEY_RANGE)
5997
for (SEL_ARG *pos=first(); pos ; pos=pos->next)
6000
if (pos->next_key_part)
6002
ulong count=count_key_part_usage(root,pos->next_key_part);
6003
if (count > pos->next_key_part->use_count)
6005
errmsg_printf(ERRMSG_LVL_INFO, "Use_count: Wrong count for key at 0x%lx, %lu "
6006
"should be %lu", (long unsigned int)pos,
6007
pos->next_key_part->use_count, count);
6010
pos->next_key_part->test_use_count(root);
6013
if (e_count != elements)
6014
errmsg_printf(ERRMSG_LVL_WARN, "Wrong use count: %u (should be %u) for tree at 0x%lx",
6015
e_count, elements, (long unsigned int) this);
3534
6020
/****************************************************************************
3535
6021
MRR Range Sequence Interface implementation that walks a SEL_ARG* tree.
3536
6022
****************************************************************************/
4350
Range sequence interface implementation for array<QuickRange>: initialize
6849
Perform key scans for all used indexes (except CPK), get rowids and merge
6850
them into an ordered, duplicate-free sequence of rowids.
6852
The merge/duplicate removal is performed using Unique class. We put all
6853
rowids into Unique, get the sorted sequence and destroy the Unique.
6855
If table has a clustered primary key that covers all rows (true for bdb
6856
and innodb currently) and one of the index_merge scans is a scan on PK,
6857
then rows that will be retrieved by PK scan are not put into Unique and
6858
primary key scan is not performed here, it is performed later separately.
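/*
  A minimal sketch of the rowid merge described above, using standard
  containers: rowids produced by several key scans are collected into a
  sorted, duplicate-free set (playing the role of the Unique object), and the
  table rows would then be fetched in that rowid ("disk sweep") order.  The
  Rowid typedef and function name are hypothetical stand-ins, not the server
  API.
*/
#if 0   /* sketch only */
#include <set>
#include <string>
#include <vector>

typedef std::string Rowid;              // stand-in for handler::ref bytes

static std::vector<Rowid>
sketch_index_merge(const std::vector<std::vector<Rowid> > &scans)
{
  std::set<Rowid> unique_rowids;        // ordered and duplicate-free

  // Phase 1: run every non-CPK key scan and remember the rowids it returned.
  for (size_t i= 0; i < scans.size(); i++)
    for (size_t j= 0; j < scans[i].size(); j++)
      unique_rowids.insert(scans[i][j]);

  // Phase 2: the rows would now be read in this sorted rowid order (rnd_pos).
  return std::vector<Rowid>(unique_rowids.begin(), unique_rowids.end());
}
#endif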
6865
int QUICK_INDEX_MERGE_SELECT::read_keys_and_merge()
6867
List_iterator_fast<QUICK_RANGE_SELECT> cur_quick_it(quick_selects);
6868
QUICK_RANGE_SELECT* cur_quick;
6871
handler *file= head->file;
6873
file->extra(HA_EXTRA_KEYREAD);
6874
head->prepare_for_position();
6876
cur_quick_it.rewind();
6877
cur_quick= cur_quick_it++;
6878
assert(cur_quick != 0);
6881
We reuse the same instance of handler so we need to call both init and reset here.
6884
if (cur_quick->init() || cur_quick->reset())
6887
unique= new Unique(refpos_order_cmp, (void *)file,
6889
session->variables.sortbuff_size);
6894
while ((result= cur_quick->get_next()) == HA_ERR_END_OF_FILE)
6896
cur_quick->range_end();
6897
cur_quick= cur_quick_it++;
6901
if (cur_quick->file->inited != handler::NONE)
6902
cur_quick->file->ha_index_end();
6903
if (cur_quick->init() || cur_quick->reset())
6909
if (result != HA_ERR_END_OF_FILE)
6911
cur_quick->range_end();
6917
if (session->killed)
6920
/* skip row if it will be retrieved by clustered PK scan */
6921
if (pk_quick_select && pk_quick_select->row_in_ranges())
6924
cur_quick->file->position(cur_quick->record);
6925
result= unique->unique_add((char*)cur_quick->file->ref);
6931
/* ok, all row ids are in Unique */
6932
result= unique->get(head);
6934
doing_pk_scan= false;
6935
/* index_merge currently doesn't support "using index" at all */
6936
file->extra(HA_EXTRA_NO_KEYREAD);
6937
/* start table scan */
6938
init_read_record(&read_record, session, head, (SQL_SELECT*) 0, 1, 1);
6944
Get next row for index_merge.
6946
The rows are read from
6947
1. rowids stored in Unique.
6948
2. QUICK_RANGE_SELECT with clustered primary key (if any).
6949
The sets of rows retrieved in 1) and 2) are guaranteed to be disjoint.
6952
int QUICK_INDEX_MERGE_SELECT::get_next()
6957
return(pk_quick_select->get_next());
6959
if ((result= read_record.read_record(&read_record)) == -1)
6961
result= HA_ERR_END_OF_FILE;
6962
end_read_record(&read_record);
6963
/* All rows from Unique have been retrieved, do a clustered PK scan */
6964
if (pk_quick_select)
6966
doing_pk_scan= true;
6967
if ((result= pk_quick_select->init()) ||
6968
(result= pk_quick_select->reset()))
6970
return(pk_quick_select->get_next());
6979
Retrieve next record.
6981
QUICK_ROR_INTERSECT_SELECT::get_next()
6984
Invariant on enter/exit: all intersected selects have retrieved all index
6985
records with rowid <= some_rowid_val and no intersected select has
6986
retrieved any index records with rowid > some_rowid_val.
6987
We start fresh and loop until we have retrieved the same rowid in each of
6988
the key scans or an error occurs.
6990
If a Clustered PK scan is present, it is used only to check if row
6991
satisfies its condition (and never used for row retrieval).
6995
other - Error code if any error occurred.
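/*
  A minimal sketch of the "catch-up" invariant described above, on plain
  sorted integer sequences: keep a candidate rowid, advance each stream until
  it reaches or passes the candidate, and let any larger rowid become the new
  candidate; a rowid is emitted only when every stream agrees on it.  Integer
  rowids, the cursor vector and the function name are simplifying
  assumptions.
*/
#if 0   /* sketch only */
#include <vector>
#include <cstddef>

static bool sketch_ror_intersect_next(const std::vector<std::vector<int> > &streams,
                                      std::vector<size_t> &pos,  // one cursor per stream, all 0 initially
                                      int &out_rowid)
{
  if (streams.empty() || pos[0] >= streams[0].size())
    return false;
  int candidate= streams[0][pos[0]];              // first stream proposes a candidate
  size_t agreed= 1;
  size_t i= 1 % streams.size();
  while (agreed < streams.size())
  {
    while (pos[i] < streams[i].size() && streams[i][pos[i]] < candidate)
      pos[i]++;                                   // catch this stream up to the candidate
    if (pos[i] >= streams[i].size())
      return false;                               // a stream ran out: no more common rowids
    if (streams[i][pos[i]] == candidate)
      agreed++;                                   // this stream confirms the candidate
    else
    {
      candidate= streams[i][pos[i]];              // larger rowid becomes the new candidate
      agreed= 1;
    }
    i= (i + 1) % streams.size();
  }
  out_rowid= candidate;                           // every stream is positioned on it
  for (size_t k= 0; k < streams.size(); k++)
    pos[k]++;                                     // step past the match for the next call
  return true;
}
#endif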
6998
int QUICK_ROR_INTERSECT_SELECT::get_next()
7000
List_iterator_fast<QUICK_RANGE_SELECT> quick_it(quick_selects);
7001
QUICK_RANGE_SELECT* quick;
7003
uint32_t last_rowid_count=0;
7007
/* Get a rowid for first quick and save it as a 'candidate' */
7009
error= quick->get_next();
7012
while (!error && !cpk_quick->row_in_ranges())
7013
error= quick->get_next();
7018
quick->file->position(quick->record);
7019
memcpy(last_rowid, quick->file->ref, head->file->ref_length);
7020
last_rowid_count= 1;
7022
while (last_rowid_count < quick_selects.elements)
7024
if (!(quick= quick_it++))
7032
if ((error= quick->get_next()))
7034
quick->file->position(quick->record);
7035
cmp= head->file->cmp_ref(quick->file->ref, last_rowid);
7038
/* Ok, current select 'caught up' and returned ref >= cur_ref */
7041
/* Found a row with ref > cur_ref. Make it a new 'candidate' */
7044
while (!cpk_quick->row_in_ranges())
7046
if ((error= quick->get_next()))
7050
memcpy(last_rowid, quick->file->ref, head->file->ref_length);
7051
last_rowid_count= 1;
7055
/* current 'candidate' row confirmed by this select */
7060
/* We get here if we got the same row ref in all scans. */
7061
if (need_to_fetch_row)
7062
error= head->file->rnd_pos(head->record[0], last_rowid);
7063
} while (error == HA_ERR_RECORD_DELETED);
7069
Retrieve next record.
7071
QUICK_ROR_UNION_SELECT::get_next()
7074
Enter/exit invariant:
7075
For each quick select in the queue a {key,rowid} tuple has been
7076
retrieved but the corresponding row hasn't been passed to output.
7080
other - Error code if any error occurred.
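/*
  A minimal sketch of the queue discipline described above, with standard
  containers: each sorted rowid stream keeps one "current" element in a
  min-priority queue; the smallest is popped, duplicates of the previously
  returned rowid are suppressed, and the owning stream is advanced and
  re-inserted.  Integer rowids and the names are simplifying assumptions.
*/
#if 0   /* sketch only */
#include <queue>
#include <vector>
#include <utility>
#include <functional>

typedef std::pair<int, size_t> QueueEntry;        // (current rowid, stream index)

static void sketch_ror_union(const std::vector<std::vector<int> > &streams,
                             std::vector<int> &out)
{
  std::priority_queue<QueueEntry, std::vector<QueueEntry>,
                      std::greater<QueueEntry> > queue;   // min-heap on rowid
  std::vector<size_t> pos(streams.size(), 0);

  for (size_t i= 0; i < streams.size(); i++)
    if (!streams[i].empty())
      queue.push(QueueEntry(streams[i][pos[i]++], i));

  bool have_prev= false;
  int prev_rowid= 0;
  while (!queue.empty())
  {
    QueueEntry top= queue.top();
    queue.pop();
    if (!have_prev || top.first != prev_rowid)    // skip duplicate rowids
    {
      out.push_back(top.first);                   // corresponds to rnd_pos(rowid)
      prev_rowid= top.first;
      have_prev= true;
    }
    size_t s= top.second;
    if (pos[s] < streams[s].size())
      queue.push(QueueEntry(streams[s][pos[s]++], s));    // refill from the same stream
  }
}
#endif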
7083
int QUICK_ROR_UNION_SELECT::get_next()
7086
QUICK_SELECT_I *quick;
7094
return(HA_ERR_END_OF_FILE);
7095
/* Ok, we have a queue with >= 1 scans */
7097
quick= queue->top();
7098
memcpy(cur_rowid, quick->last_rowid, rowid_length);
7100
/* put into queue rowid from the same stream as top element */
7101
if ((error= quick->get_next()))
7103
if (error != HA_ERR_END_OF_FILE)
7109
quick->save_last_pos();
7114
if (!have_prev_rowid)
7116
/* No rows have been returned yet */
7118
have_prev_rowid= true;
7121
dup_row= !head->file->cmp_ref(cur_rowid, prev_rowid);
7125
cur_rowid= prev_rowid;
7128
error= head->file->rnd_pos(quick->record, prev_rowid);
7129
} while (error == HA_ERR_RECORD_DELETED);
7134
int QUICK_RANGE_SELECT::reset()
7137
unsigned char *mrange_buff;
7139
HANDLER_BUFFER empty_buf;
7141
cur_range= (QUICK_RANGE**) ranges.buffer;
7143
if (file->inited == handler::NONE && (error= file->ha_index_init(index,1)))
7146
/* Allocate buffer if we need one but haven't allocated it yet */
7147
if (mrr_buf_size && !mrr_buf_desc)
7149
buf_size= mrr_buf_size;
7150
while (buf_size && !my_multi_malloc(MYF(MY_WME),
7151
&mrr_buf_desc, sizeof(*mrr_buf_desc),
7152
&mrange_buff, buf_size,
7155
/* Try to shrink the buffers until both are 0. */
7159
return(HA_ERR_OUT_OF_MEM);
7161
/* Initialize the handler buffer. */
7162
mrr_buf_desc->buffer= mrange_buff;
7163
mrr_buf_desc->buffer_end= mrange_buff + buf_size;
7164
mrr_buf_desc->end_of_used_area= mrange_buff;
7168
empty_buf.buffer= empty_buf.buffer_end= empty_buf.end_of_used_area= NULL;
7171
mrr_flags |= HA_MRR_SORTED;
7172
RANGE_SEQ_IF seq_funcs= {quick_range_seq_init, quick_range_seq_next};
7173
error= file->multi_range_read_init(&seq_funcs, (void*)this, ranges.elements,
7174
mrr_flags, mrr_buf_desc? mrr_buf_desc:
7181
Range sequence interface implementation for array<QUICK_RANGE>: initialize
4353
7184
quick_range_seq_init()
4354
init_param Caller-opaque parameter: QuickRangeSelect* pointer
7185
init_param Caller-opaque parameter: QUICK_RANGE_SELECT* pointer
4355
7186
n_ranges Number of ranges in the sequence (ignored)
4356
7187
flags MRR flags (currently not used)
4383
7214
1 No more ranges in the sequence
4385
uint32_t optimizer::quick_range_seq_next(range_seq_t rseq, KEY_MULTI_RANGE *range)
7217
uint32_t quick_range_seq_next(range_seq_t rseq, KEY_MULTI_RANGE *range)
4387
QuickRangeSequenceContext *ctx= (QuickRangeSequenceContext*) rseq;
7219
QUICK_RANGE_SEQ_CTX *ctx= (QUICK_RANGE_SEQ_CTX*)rseq;
4389
7221
if (ctx->cur == ctx->last)
4390
7222
return 1; /* no more ranges */
4392
optimizer::QuickRange *cur= *(ctx->cur);
7224
QUICK_RANGE *cur= *(ctx->cur);
4393
7225
key_range *start_key= &range->start_key;
4394
key_range *end_key= &range->end_key;
4396
start_key->key= cur->min_key;
4397
7229
start_key->length= cur->min_length;
4398
7230
start_key->keypart_map= cur->min_keypart_map;
4399
start_key->flag= ((cur->flag & NEAR_MIN) ? HA_READ_AFTER_KEY :
4400
(cur->flag & EQ_RANGE) ?
4401
HA_READ_KEY_EXACT : HA_READ_KEY_OR_NEXT);
4402
end_key->key= cur->max_key;
4403
end_key->length= cur->max_length;
4404
7236
end_key->keypart_map= cur->max_keypart_map;
4406
7238
We use HA_READ_AFTER_KEY here because if we are reading on a key
4407
7239
prefix we want to find all keys with this prefix.
4409
end_key->flag= (cur->flag & NEAR_MAX ? HA_READ_BEFORE_KEY :
4411
7243
range->range_flag= cur->flag;
7250
Get next possible record using quick-struct.
7253
QUICK_RANGE_SELECT::get_next()
7256
Record is read into table->record[0]
7260
HA_ERR_END_OF_FILE No (more) rows in range
7264
int QUICK_RANGE_SELECT::get_next()
7267
if (in_ror_merged_scan)
7270
We don't need to signal the bitmap change as the bitmap is always the
7271
same for this head->file
7273
head->column_bitmaps_set(&column_bitmap, &column_bitmap);
7276
int result= file->multi_range_read_next(&dummy);
7278
if (in_ror_merged_scan)
7280
/* Restore bitmaps set on entry */
7281
head->column_bitmaps_set(save_read_set, save_write_set);
7288
Get the next record with a different prefix.
7291
QUICK_RANGE_SELECT::get_next_prefix()
7292
prefix_length length of cur_prefix
7293
cur_prefix prefix of a key to be searched for
7296
Each subsequent call to the method retrieves the first record that has a
7297
prefix with length prefix_length different from cur_prefix, such that the
7298
record with the new prefix is within the ranges described by
7299
this->ranges. The record found is stored into the buffer pointed by this->record.
7301
The method is useful for GROUP-BY queries with range conditions to
7302
discover the prefix of the next group that satisfies the range conditions.
7305
This method is a modified copy of QUICK_RANGE_SELECT::get_next(), so both
7306
methods should be unified into a more general one to reduce code duplication.
7311
HA_ERR_END_OF_FILE if returned all keys
7312
other if some error occurred
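/*
  A minimal sketch of the "jump past the current prefix" seek performed by the
  method above, on a sorted vector of string keys: find the first key whose
  leading prefix_length bytes differ from cur_prefix.  The real method does
  this with an index seek restricted to this->ranges; the comparator and
  function names here are hypothetical.
*/
#if 0   /* sketch only */
#include <algorithm>
#include <string>
#include <vector>

struct PrefixCompare
{
  std::string::size_type len;
  explicit PrefixCompare(std::string::size_type l) : len(l) {}
  // "value sorts before key" iff value is less than key's leading bytes
  bool operator()(const std::string &value, const std::string &key) const
  {
    return value < key.substr(0, len);
  }
};

static std::vector<std::string>::const_iterator
sketch_next_prefix(const std::vector<std::string> &keys,   // sorted index keys
                   const std::string &cur_prefix,
                   std::string::size_type prefix_length)
{
  std::string prefix= cur_prefix.substr(0, prefix_length);
  // The first key whose leading prefix_length bytes sort after 'prefix' is
  // the start of the next prefix group (end() if this prefix is the last).
  return std::upper_bound(keys.begin(), keys.end(), prefix,
                          PrefixCompare(prefix_length));
}
#endif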
7315
int QUICK_RANGE_SELECT::get_next_prefix(uint32_t prefix_length,
7316
key_part_map keypart_map,
7317
unsigned char *cur_prefix)
7322
key_range start_key, end_key;
7325
/* Read the next record in the same range with prefix after cur_prefix. */
7326
assert(cur_prefix != 0);
7327
result= file->index_read_map(record, cur_prefix, keypart_map,
7329
if (result || (file->compare_key(file->end_range) <= 0))
7333
uint32_t count= ranges.elements - (cur_range - (QUICK_RANGE**) ranges.buffer);
7336
/* Ranges have already been used up before. None is left for read. */
7338
return HA_ERR_END_OF_FILE;
7340
last_range= *(cur_range++);
7342
start_key.key= (const unsigned char*) last_range->min_key;
7343
start_key.length= min(last_range->min_length, (uint16_t)prefix_length);
7344
start_key.keypart_map= last_range->min_keypart_map & keypart_map;
7345
start_key.flag= ((last_range->flag & NEAR_MIN) ? HA_READ_AFTER_KEY :
7346
(last_range->flag & EQ_RANGE) ?
7347
HA_READ_KEY_EXACT : HA_READ_KEY_OR_NEXT);
7348
end_key.key= (const unsigned char*) last_range->max_key;
7349
end_key.length= min(last_range->max_length, (uint16_t)prefix_length);
7350
end_key.keypart_map= last_range->max_keypart_map & keypart_map;
7352
We use READ_AFTER_KEY here because if we are reading on a key
7353
prefix we want to find all keys with this prefix
7355
end_key.flag= (last_range->flag & NEAR_MAX ? HA_READ_BEFORE_KEY :
7358
result= file->read_range_first(last_range->min_keypart_map ? &start_key : 0,
7359
last_range->max_keypart_map ? &end_key : 0,
7360
test(last_range->flag & EQ_RANGE),
7362
if (last_range->flag == (UNIQUE_RANGE | EQ_RANGE))
7363
last_range= 0; // Stop searching
7365
if (result != HA_ERR_END_OF_FILE)
7367
last_range= 0; // No matching rows; go to next range
7373
Check if current row will be retrieved by this QUICK_RANGE_SELECT
7376
It is assumed that currently a scan is being done on another index
7377
which reads all necessary parts of the index that is scanned by this quick select.
7379
The implementation does a binary search on sorted array of disjoint
7380
ranges, without taking size of range into account.
7382
This function is used to filter out clustered PK scan rows in
7383
index_merge quick select.
7386
true if current row will be retrieved by this quick select
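/*
  A minimal sketch of the binary search described above, on a sorted array of
  disjoint [min,max] ranges: narrow down to the only range whose max is not
  below the value, then test whether the value also reaches that range's min.
  Integer keys and the names are simplifying assumptions.
*/
#if 0   /* sketch only */
#include <vector>
#include <cstddef>

struct SketchRange { int min_key, max_key; };      // closed, disjoint, sorted by min

static bool sketch_value_in_ranges(const std::vector<SketchRange> &ranges, int value)
{
  if (ranges.empty())
    return false;
  size_t low= 0, high= ranges.size() - 1;
  while (low < high)
  {
    size_t mid= (low + high) / 2;
    if (value > ranges[mid].max_key)
      low= mid + 1;                                // value lies right of ranges[mid]
    else
      high= mid;                                   // ranges[mid] could still contain it
  }
  return value >= ranges[low].min_key && value <= ranges[low].max_key;
}
#endif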
7390
bool QUICK_RANGE_SELECT::row_in_ranges()
7394
uint32_t max= ranges.elements - 1;
7395
uint32_t mid= (max + min)/2;
7399
if (cmp_next(*(QUICK_RANGE**)dynamic_array_ptr(&ranges, mid)))
7401
/* current row value > mid->max */
7406
mid= (min + max) / 2;
7408
res= *(QUICK_RANGE**)dynamic_array_ptr(&ranges, mid);
7409
return (!cmp_next(res) && !cmp_prev(res));
7413
This is a hack: we inherit from QUICK_SELECT so that we can use the
7414
get_next() interface, but we have to hold a pointer to the original
7415
QUICK_SELECT because its data are used all over the place. What
7416
should be done is to factor out the data that is needed into a base
7417
class (QUICK_SELECT), and then have two subclasses (_ASC and _DESC)
7418
which handle the ranges and implement the get_next() function. But
7419
for now, this seems to work right at least.
7422
QUICK_SELECT_DESC::QUICK_SELECT_DESC(QUICK_RANGE_SELECT *q, uint32_t, bool *)
7423
:QUICK_RANGE_SELECT(*q), rev_it(rev_ranges)
7427
QUICK_RANGE **pr= (QUICK_RANGE**)ranges.buffer;
7428
QUICK_RANGE **end_range= pr + ranges.elements;
7429
for (; pr!=end_range; pr++)
7430
rev_ranges.push_front(*pr);
7432
/* Remove EQ_RANGE flag for keys that are not using the full key */
7433
for (r = rev_it++; r; r = rev_it++)
7435
if ((r->flag & EQ_RANGE) &&
7436
head->key_info[index].key_length != r->max_length)
7437
r->flag&= ~EQ_RANGE;
7440
q->dont_free=1; // Don't free shared mem
7445
int QUICK_SELECT_DESC::get_next()
7447
/* The max key is handled as follows:
7448
* - if there is NO_MAX_RANGE, start at the end and move backwards
7449
* - if it is an EQ_RANGE, which means that max key covers the entire
7450
* key, go directly to the key and read through it (sorting backwards is
7451
* same as sorting forwards)
7452
* - if it is NEAR_MAX, go to the key or next, step back once, and move backwards
7454
* - otherwise (not NEAR_MAX == include the key), go after the key,
7455
* step back once, and move backwards
7462
{ // Already read through key
7463
result = ((last_range->flag & EQ_RANGE)
7464
? file->index_next_same(record, last_range->min_key,
7465
last_range->min_length) :
7466
file->index_prev(record));
7469
if (cmp_prev(*rev_it.ref()) == 0)
7472
else if (result != HA_ERR_END_OF_FILE)
7476
if (!(last_range= rev_it++))
7477
return HA_ERR_END_OF_FILE; // All ranges used
7479
if (last_range->flag & NO_MAX_RANGE) // Read last record
7482
if ((local_error=file->index_last(record)))
7483
return(local_error); // Empty table
7484
if (cmp_prev(last_range) == 0)
7486
last_range= 0; // No match; go to next range
7490
if (last_range->flag & EQ_RANGE)
7492
result = file->index_read_map(record, last_range->max_key,
7493
last_range->max_keypart_map,
7498
assert(last_range->flag & NEAR_MAX ||
7499
range_reads_after_key(last_range));
7500
result=file->index_read_map(record, last_range->max_key,
7501
last_range->max_keypart_map,
7502
((last_range->flag & NEAR_MAX) ?
7503
HA_READ_BEFORE_KEY :
7504
HA_READ_PREFIX_LAST_OR_PREV));
7508
if (result != HA_ERR_KEY_NOT_FOUND && result != HA_ERR_END_OF_FILE)
7510
last_range= 0; // Not found, to next range
7513
if (cmp_prev(last_range) == 0)
7515
if (last_range->flag == (UNIQUE_RANGE | EQ_RANGE))
7516
last_range= 0; // Stop searching
7517
return 0; // Found key is in range
7519
last_range= 0; // To next range
7525
Compare if found key is over max-value
7526
Returns 0 if key <= range->max_key
7527
TODO: Figure out why this function can't be as simple as cmp_prev().
7530
int QUICK_RANGE_SELECT::cmp_next(QUICK_RANGE *range_arg)
7532
if (range_arg->flag & NO_MAX_RANGE)
7533
return 0; /* key can't be too large */
7535
KEY_PART *key_part=key_parts;
7536
uint32_t store_length;
7538
for (unsigned char *key=range_arg->max_key, *end=key+range_arg->max_length;
7540
key+= store_length, key_part++)
7543
store_length= key_part->store_length;
7544
if (key_part->null_bit)
7548
if (!key_part->field->is_null())
7552
else if (key_part->field->is_null())
7554
key++; // Skip null byte
7557
if ((cmp=key_part->field->key_cmp(key, key_part->length)) < 0)
7562
return (range_arg->flag & NEAR_MAX) ? 1 : 0; // Exact match
7567
Returns 0 if found key is inside range (found key >= range->min_key).
7570
int QUICK_RANGE_SELECT::cmp_prev(QUICK_RANGE *range_arg)
7573
if (range_arg->flag & NO_MIN_RANGE)
7574
return 0; /* key can't be too small */
7576
cmp= key_cmp(key_part_info, range_arg->min_key,
7577
range_arg->min_length);
7578
if (cmp > 0 || (cmp == 0 && (range_arg->flag & NEAR_MIN) == false))
7580
return 1; // outside of range
7585
* true if this range will require using HA_READ_AFTER_KEY
7586
See comment in get_next() about this
7589
bool QUICK_SELECT_DESC::range_reads_after_key(QUICK_RANGE *range_arg)
7591
return ((range_arg->flag & (NO_MAX_RANGE | NEAR_MAX)) ||
7592
!(range_arg->flag & EQ_RANGE) ||
7593
head->key_info[index].key_length != range_arg->max_length) ? 1 : 0;
7597
void QUICK_RANGE_SELECT::add_info_string(String *str)
7599
KEY *key_info= head->key_info + index;
7600
str->append(key_info->name);
7603
void QUICK_INDEX_MERGE_SELECT::add_info_string(String *str)
7605
QUICK_RANGE_SELECT *quick;
7607
List_iterator_fast<QUICK_RANGE_SELECT> it(quick_selects);
7608
str->append(STRING_WITH_LEN("sort_union("));
7609
while ((quick= it++))
7615
quick->add_info_string(str);
7617
if (pk_quick_select)
7620
pk_quick_select->add_info_string(str);
7625
void QUICK_ROR_INTERSECT_SELECT::add_info_string(String *str)
7628
QUICK_RANGE_SELECT *quick;
7629
List_iterator_fast<QUICK_RANGE_SELECT> it(quick_selects);
7630
str->append(STRING_WITH_LEN("intersect("));
7631
while ((quick= it++))
7633
KEY *key_info= head->key_info + quick->index;
7638
str->append(key_info->name);
7642
KEY *key_info= head->key_info + cpk_quick->index;
7644
str->append(key_info->name);
7649
void QUICK_ROR_UNION_SELECT::add_info_string(String *str)
7652
QUICK_SELECT_I *quick;
7653
List_iterator_fast<QUICK_SELECT_I> it(quick_selects);
7654
str->append(STRING_WITH_LEN("union("));
7655
while ((quick= it++))
7661
quick->add_info_string(str);
7667
void QUICK_RANGE_SELECT::add_keys_and_lengths(String *key_names,
7668
String *used_lengths)
7672
KEY *key_info= head->key_info + index;
7673
key_names->append(key_info->name);
7674
length= int64_t2str(max_used_key_length, buf, 10) - buf;
7675
used_lengths->append(buf, length);
7678
void QUICK_INDEX_MERGE_SELECT::add_keys_and_lengths(String *key_names,
7679
String *used_lengths)
7684
QUICK_RANGE_SELECT *quick;
7686
List_iterator_fast<QUICK_RANGE_SELECT> it(quick_selects);
7687
while ((quick= it++))
7693
key_names->append(',');
7694
used_lengths->append(',');
7697
KEY *key_info= head->key_info + quick->index;
7698
key_names->append(key_info->name);
7699
length= int64_t2str(quick->max_used_key_length, buf, 10) - buf;
7700
used_lengths->append(buf, length);
7702
if (pk_quick_select)
7704
KEY *key_info= head->key_info + pk_quick_select->index;
7705
key_names->append(',');
7706
key_names->append(key_info->name);
7707
length= int64_t2str(pk_quick_select->max_used_key_length, buf, 10) - buf;
7708
used_lengths->append(',');
7709
used_lengths->append(buf, length);
7713
void QUICK_ROR_INTERSECT_SELECT::add_keys_and_lengths(String *key_names,
7714
String *used_lengths)
7719
QUICK_RANGE_SELECT *quick;
7720
List_iterator_fast<QUICK_RANGE_SELECT> it(quick_selects);
7721
while ((quick= it++))
7723
KEY *key_info= head->key_info + quick->index;
7728
key_names->append(',');
7729
used_lengths->append(',');
7731
key_names->append(key_info->name);
7732
length= int64_t2str(quick->max_used_key_length, buf, 10) - buf;
7733
used_lengths->append(buf, length);
7738
KEY *key_info= head->key_info + cpk_quick->index;
7739
key_names->append(',');
7740
key_names->append(key_info->name);
7741
length= int64_t2str(cpk_quick->max_used_key_length, buf, 10) - buf;
7742
used_lengths->append(',');
7743
used_lengths->append(buf, length);
7747
void QUICK_ROR_UNION_SELECT::add_keys_and_lengths(String *key_names,
7748
String *used_lengths)
7751
QUICK_SELECT_I *quick;
7752
List_iterator_fast<QUICK_SELECT_I> it(quick_selects);
7753
while ((quick= it++))
7759
used_lengths->append(',');
7760
key_names->append(',');
7762
quick->add_keys_and_lengths(key_names, used_lengths);
7767
/*******************************************************************************
7768
* Implementation of QUICK_GROUP_MIN_MAX_SELECT
7769
*******************************************************************************/
4417
7771
static inline uint32_t get_field_keypart(KEY *index, Field *field);
4419
static inline optimizer::SEL_ARG * get_index_range_tree(uint32_t index,
4420
optimizer::SEL_TREE *range_tree,
4421
optimizer::Parameter *param,
4422
uint32_t *param_idx);
4424
static bool get_constant_key_infix(KEY *index_info,
4425
optimizer::SEL_ARG *index_range_tree,
4426
KEY_PART_INFO *first_non_group_part,
4427
KEY_PART_INFO *min_max_arg_part,
4428
KEY_PART_INFO *last_part,
4430
unsigned char *key_infix,
4431
uint32_t *key_infix_len,
4432
KEY_PART_INFO **first_non_infix_part);
7772
static inline SEL_ARG * get_index_range_tree(uint32_t index, SEL_TREE* range_tree,
7773
PARAM *param, uint32_t *param_idx);
7774
static bool get_constant_key_infix(KEY *index_info, SEL_ARG *index_range_tree,
7775
KEY_PART_INFO *first_non_group_part,
7776
KEY_PART_INFO *min_max_arg_part,
7777
KEY_PART_INFO *last_part, Session *session,
7778
unsigned char *key_infix, uint32_t *key_infix_len,
7779
KEY_PART_INFO **first_non_infix_part);
4434
7780
static bool check_group_min_max_predicates(COND *cond, Item_field *min_max_arg_item);
4437
cost_group_min_max(Table* table,
4439
uint32_t used_key_parts,
4440
uint32_t group_key_parts,
4441
optimizer::SEL_TREE *range_tree,
4442
optimizer::SEL_ARG *index_tree,
4443
ha_rows quick_prefix_records,
7783
cost_group_min_max(Table* table, KEY *index_info, uint32_t used_key_parts,
7784
uint32_t group_key_parts, SEL_TREE *range_tree,
7785
SEL_ARG *index_tree, ha_rows quick_prefix_records,
7786
bool have_min, bool have_max,
7787
double *read_cost, ha_rows *records);
5567
optimizer::QuickSelectInterface *optimizer::RangeReadPlan::make_quick(optimizer::Parameter *param, bool, memory::Root *parent_alloc)
5569
optimizer::QuickRangeSelect *quick= NULL;
5570
if ((quick= optimizer::get_quick_select(param,
5577
quick->records= records;
5578
quick->read_time= read_cost;
5584
} /* namespace drizzled */
8846
Construct new quick select for group queries with min/max.
8849
QUICK_GROUP_MIN_MAX_SELECT::QUICK_GROUP_MIN_MAX_SELECT()
8850
table The table being accessed
8851
join Descriptor of the current query
8852
have_min true if the query selects a MIN function
8853
have_max true if the query selects a MAX function
8854
min_max_arg_part The only argument field of all MIN/MAX functions
8855
group_prefix_len Length of all key parts in the group prefix
8856
prefix_key_parts All key parts in the group prefix
8857
index_info The index chosen for data access
8858
use_index The id of index_info
8859
read_cost Cost of this access method
8860
records Number of records returned
8861
key_infix_len Length of the key infix appended to the group prefix
8862
key_infix Infix of constants from equality predicates
8863
parent_alloc Memory pool for this and quick_prefix_select data
8869
QUICK_GROUP_MIN_MAX_SELECT::
8870
QUICK_GROUP_MIN_MAX_SELECT(Table *table, JOIN *join_arg, bool have_min_arg,
8872
KEY_PART_INFO *min_max_arg_part_arg,
8873
uint32_t group_prefix_len_arg, uint32_t group_key_parts_arg,
8874
uint32_t used_key_parts_arg, KEY *index_info_arg,
8875
uint32_t use_index, double read_cost_arg,
8876
ha_rows records_arg, uint32_t key_infix_len_arg,
8877
unsigned char *key_infix_arg, MEM_ROOT *parent_alloc)
8878
:join(join_arg), index_info(index_info_arg),
8879
group_prefix_len(group_prefix_len_arg),
8880
group_key_parts(group_key_parts_arg), have_min(have_min_arg),
8881
have_max(have_max_arg), seen_first_key(false),
8882
min_max_arg_part(min_max_arg_part_arg), key_infix(key_infix_arg),
8883
key_infix_len(key_infix_len_arg), min_functions_it(NULL),
8884
max_functions_it(NULL)
8889
record= head->record[0];
8890
tmp_record= head->record[1];
8891
read_time= read_cost_arg;
8892
records= records_arg;
8893
used_key_parts= used_key_parts_arg;
8894
real_key_parts= used_key_parts_arg;
8895
real_prefix_len= group_prefix_len + key_infix_len;
8897
min_max_arg_len= min_max_arg_part ? min_max_arg_part->store_length : 0;
8900
We can't have parent_alloc set as the init function can't handle this case
8903
assert(!parent_alloc);
8906
init_sql_alloc(&alloc, join->session->variables.range_alloc_block_size, 0);
8907
join->session->mem_root= &alloc;
8910
memset(&alloc, 0, sizeof(MEM_ROOT)); // ensure that it's not used
8915
Do post-constructor initialization.
8918
QUICK_GROUP_MIN_MAX_SELECT::init()
8921
The method performs initialization that cannot be done in the constructor
8922
such as memory allocations that may fail. It allocates memory for the
8923
group prefix and infix buffers, and for the lists of MIN/MAX items to be
8924
updated during execution.
8931
int QUICK_GROUP_MIN_MAX_SELECT::init()
8933
if (group_prefix) /* Already initialized. */
8936
if (!(last_prefix= (unsigned char*) alloc_root(&alloc, group_prefix_len)))
8939
We may use group_prefix to store keys with all select fields, so allocate
8940
enough space for it.
8942
if (!(group_prefix= (unsigned char*) alloc_root(&alloc,
8943
real_prefix_len + min_max_arg_len)))
8946
if (key_infix_len > 0)
8949
The memory location pointed to by key_infix will be deleted soon, so
8950
allocate a new buffer and copy the key_infix into it.
8952
unsigned char *tmp_key_infix= (unsigned char*) alloc_root(&alloc, key_infix_len);
8955
memcpy(tmp_key_infix, this->key_infix, key_infix_len);
8956
this->key_infix= tmp_key_infix;
8959
if (min_max_arg_part)
8961
if (my_init_dynamic_array(&min_max_ranges, sizeof(QUICK_RANGE*), 16, 16))
8966
if (!(min_functions= new List<Item_sum>))
8970
min_functions= NULL;
8973
if (!(max_functions= new List<Item_sum>))
8977
max_functions= NULL;
8979
Item_sum *min_max_item;
8980
Item_sum **func_ptr= join->sum_funcs;
8981
while ((min_max_item= *(func_ptr++)))
8983
if (have_min && (min_max_item->sum_func() == Item_sum::MIN_FUNC))
8984
min_functions->push_back(min_max_item);
8985
else if (have_max && (min_max_item->sum_func() == Item_sum::MAX_FUNC))
8986
max_functions->push_back(min_max_item);
8991
if (!(min_functions_it= new List_iterator<Item_sum>(*min_functions)))
8997
if (!(max_functions_it= new List_iterator<Item_sum>(*max_functions)))
9002
min_max_ranges.elements= 0;
9008
QUICK_GROUP_MIN_MAX_SELECT::~QUICK_GROUP_MIN_MAX_SELECT()
9010
if (file->inited != handler::NONE)
9011
file->ha_index_end();
9012
if (min_max_arg_part)
9013
delete_dynamic(&min_max_ranges);
9014
free_root(&alloc,MYF(0));
9015
delete min_functions_it;
9016
delete max_functions_it;
9017
delete quick_prefix_select;
9022
Eventually create and add a new quick range object.
9025
QUICK_GROUP_MIN_MAX_SELECT::add_range()
9026
sel_range Range object from which a new QUICK_RANGE object is created
9029
Construct a new QUICK_RANGE object from a SEL_ARG object, and
9030
add it to the array min_max_ranges. If sel_arg is an infinite
9031
range, e.g. (x < 5 or x > 4), then skip it and do not construct
9039
bool QUICK_GROUP_MIN_MAX_SELECT::add_range(SEL_ARG *sel_range)
9042
uint32_t range_flag= sel_range->min_flag | sel_range->max_flag;
9044
/* Skip (-inf,+inf) ranges, e.g. (x < 5 or x > 4). */
9045
if ((range_flag & NO_MIN_RANGE) && (range_flag & NO_MAX_RANGE))
9048
if (!(sel_range->min_flag & NO_MIN_RANGE) &&
9049
!(sel_range->max_flag & NO_MAX_RANGE))
9051
if (sel_range->maybe_null &&
9052
sel_range->min_value[0] && sel_range->max_value[0])
9053
range_flag|= NULL_RANGE; /* IS NULL condition */
9054
else if (memcmp(sel_range->min_value, sel_range->max_value,
9055
min_max_arg_len) == 0)
9056
range_flag|= EQ_RANGE; /* equality condition */
9058
range= new QUICK_RANGE(sel_range->min_value, min_max_arg_len,
9059
make_keypart_map(sel_range->part),
9060
sel_range->max_value, min_max_arg_len,
9061
make_keypart_map(sel_range->part),
9065
if (insert_dynamic(&min_max_ranges, (unsigned char*)&range))
9072
Opens the ranges if there are more conditions in quick_prefix_select than
9073
the ones used for jumping through the prefixes.
9076
QUICK_GROUP_MIN_MAX_SELECT::adjust_prefix_ranges()
9079
quick_prefix_select is made over the conditions on the whole key.
9080
It defines a number of ranges of length x.
9081
However when jumping through the prefixes we use only the first
9082
few most significant keyparts in the range key. However if there
9083
are more keyparts to follow the ones we are using we must make the
9084
condition on the key inclusive (because x < "ab" means
9085
x[0] < 'a' OR (x[0] == 'a' AND x[1] < 'b').
9086
To achieve the above we must turn off the NEAR_MIN/NEAR_MAX
9088
void QUICK_GROUP_MIN_MAX_SELECT::adjust_prefix_ranges()
9090
if (quick_prefix_select &&
9091
group_prefix_len < quick_prefix_select->max_used_key_length)
9096
for (inx= 0, arr= &quick_prefix_select->ranges; inx < arr->elements; inx++)
9100
get_dynamic(arr, (unsigned char*)&range, inx);
9101
range->flag &= ~(NEAR_MIN | NEAR_MAX);
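/*
  A minimal sketch of why the flags are cleared above: when only a prefix of
  the range bound is used for jumping, a strict bound must be treated as
  inclusive.  For example, key < "ab" restricted to the first byte must become
  key[0] <= 'a'.  Plain strings and the function name are simplifying
  assumptions.
*/
#if 0   /* sketch only */
#include <string>

// Does 'key' satisfy key < upper_bound, judging only by the first prefix_len bytes?
static bool sketch_prefix_satisfies_upper(const std::string &key,
                                          const std::string &upper_bound,
                                          std::string::size_type prefix_len)
{
  std::string key_prefix=   key.substr(0, prefix_len);
  std::string bound_prefix= upper_bound.substr(0, prefix_len);
  /*
    A strict comparison on the prefix alone would wrongly reject "aa" against
    the bound "ab" truncated to one byte ("a" < "a" is false), even though
    "aa" < "ab" holds on the full key.  Hence the prefix test must be <=.
  */
  return key_prefix <= bound_prefix;
}
#endif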
9108
Determine the total number and length of the keys that will be used for index lookup.
9112
QUICK_GROUP_MIN_MAX_SELECT::update_key_stat()
9115
The total length of the keys used for index lookup depends on whether
9116
there are any predicates referencing the min/max argument, and/or if
9117
the min/max argument field can be NULL.
9118
This function does an optimistic analysis whether the search key might
9119
be extended by a constant for the min/max keypart. It is 'optimistic'
9120
because during actual execution it may happen that a particular range
9121
is skipped, and then a shorter key will be used. However this is data
9122
dependent and can't be easily estimated here.
9128
void QUICK_GROUP_MIN_MAX_SELECT::update_key_stat()
9130
max_used_key_length= real_prefix_len;
9131
if (min_max_ranges.elements > 0)
9133
QUICK_RANGE *cur_range;
9135
{ /* Check if the right-most range has a lower boundary. */
9136
get_dynamic(&min_max_ranges, (unsigned char*)&cur_range,
9137
min_max_ranges.elements - 1);
9138
if (!(cur_range->flag & NO_MIN_RANGE))
9140
max_used_key_length+= min_max_arg_len;
9146
{ /* Check if the left-most range has an upper boundary. */
9147
get_dynamic(&min_max_ranges, (unsigned char*)&cur_range, 0);
9148
if (!(cur_range->flag & NO_MAX_RANGE))
9150
max_used_key_length+= min_max_arg_len;
9156
else if (have_min && min_max_arg_part &&
9157
min_max_arg_part->field->real_maybe_null())
9160
If a MIN/MAX argument value is NULL, we can quickly determine
9161
that we're in the beginning of the next group, because NULLs
9162
are always < any other value. This allows us to quickly
9163
determine the end of the current group and jump to the next
9164
group (see next_min()) and thus effectively increases the
9167
max_used_key_length+= min_max_arg_len;
9174
Initialize a quick group min/max select for key retrieval.
9177
QUICK_GROUP_MIN_MAX_SELECT::reset()
9180
Initialize the index chosen for access and find and store the prefix
9181
of the last group. The method is expensive since it performs disk access.
9188
int QUICK_GROUP_MIN_MAX_SELECT::reset(void)
9192
file->extra(HA_EXTRA_KEYREAD); /* We need only the key attributes */
9193
if ((result= file->ha_index_init(index,1)))
9195
if (quick_prefix_select && quick_prefix_select->reset())
9197
result= file->index_last(record);
9198
if (result == HA_ERR_END_OF_FILE)
9200
/* Save the prefix of the last group. */
9201
key_copy(last_prefix, record, index_info, group_prefix_len);
9209
Get the next key containing the MIN and/or MAX key for the next group.
9212
QUICK_GROUP_MIN_MAX_SELECT::get_next()
9215
The method finds the next subsequent group of records that satisfies the
9216
query conditions and finds the keys that contain the MIN/MAX values for
9217
the key part referenced by the MIN/MAX function(s). Once a group and its
9218
MIN/MAX values are found, store these values in the Item_sum objects for
9219
the MIN/MAX functions. The rest of the values in the result row are stored
9220
in the Item_field::result_field of each select field. If the query does
9221
not contain MIN and/or MAX functions, then the function only finds the
9222
group prefix, which is a query answer itself.
9225
If both MIN and MAX are computed, then we use the fact that if there is
9226
no MIN key, there can't be a MAX key as well, so we can skip looking
9227
for a MAX key in this case.
9231
HA_ERR_END_OF_FILE if returned all keys
9232
other if some error occurred
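/*
  A minimal sketch of the overall loop described above, on an in-memory
  sorted "index" of (group, value) pairs: jump from one group prefix to the
  next and, inside each group, pick up the MIN and MAX of the aggregated
  column from the group's first and last entries.  Types and names are
  simplifying assumptions; a real index seeks to the group boundaries instead
  of scanning them.
*/
#if 0   /* sketch only */
#include <string>
#include <utility>
#include <vector>

struct SketchGroupRow { std::string group; int min_value; int max_value; };

static void sketch_loose_index_scan(
    const std::vector<std::pair<std::string, int> > &index,   // sorted by (group, value)
    std::vector<SketchGroupRow> &out)
{
  size_t i= 0;
  while (i < index.size())
  {
    const std::string &group= index[i].first;
    SketchGroupRow row;
    row.group= group;
    row.min_value= index[i].second;                // first entry of the group is the MIN
    size_t j= i;
    while (j + 1 < index.size() && index[j + 1].first == group)
      j++;                                         // a real index seeks here instead of scanning
    row.max_value= index[j].second;                // last entry of the group is the MAX
    out.push_back(row);
    i= j + 1;                                      // jump to the next group prefix
  }
}
#endif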
9235
int QUICK_GROUP_MIN_MAX_SELECT::get_next()
9240
int is_last_prefix= 0;
9243
Loop until a group is found that satisfies all query conditions or the last group is reached.
9248
result= next_prefix();
9250
Check if this is the last group prefix. Notice that at this point
9251
this->record contains the current prefix in record format.
9255
is_last_prefix= key_cmp(index_info->key_part, last_prefix,
9257
assert(is_last_prefix <= 0);
9261
if (result == HA_ERR_KEY_NOT_FOUND)
9268
min_res= next_min();
9270
update_min_result();
9272
/* If there is no MIN in the group, there is no MAX either. */
9273
if ((have_max && !have_min) ||
9274
(have_max && have_min && (min_res == 0)))
9276
max_res= next_max();
9278
update_max_result();
9279
/* If a MIN was found, a MAX must have been found as well. */
9280
assert(((have_max && !have_min) ||
9281
(have_max && have_min && (max_res == 0))));
9284
If this is just a GROUP BY or DISTINCT without MIN or MAX and there
9285
are equality predicates for the key parts after the group, find the
9286
first sub-group with the extended prefix.
9288
if (!have_min && !have_max && key_infix_len > 0)
9289
result= file->index_read_map(record, group_prefix,
9290
make_prev_keypart_map(real_key_parts),
9293
result= have_min ? min_res : have_max ? max_res : result;
9294
} while ((result == HA_ERR_KEY_NOT_FOUND || result == HA_ERR_END_OF_FILE) &&
9295
is_last_prefix != 0);
9300
Partially mimic the behavior of end_select_send. Copy the
9301
field data from Item_field::field into Item_field::result_field
9302
of each non-aggregated field (the group fields, and optionally
9303
other fields in non-ANSI SQL mode).
9305
copy_fields(&join->tmp_table_param);
9307
else if (result == HA_ERR_KEY_NOT_FOUND)
9308
result= HA_ERR_END_OF_FILE;
9315
Retrieve the minimal key in the next group.
9318
QUICK_GROUP_MIN_MAX_SELECT::next_min()
9321
Find the minimal key within this group such that the key satisfies the query
9322
conditions and NULL semantics. The found key is loaded into this->record.
9325
Depending on the values of min_max_ranges.elements, key_infix_len, and
9326
whether there is a NULL in the MIN field, this function may directly
9327
return without any data access. In this case we use the key loaded into
9328
this->record by the call to this->next_prefix() just before this call.
9332
HA_ERR_KEY_NOT_FOUND if no MIN key was found that fulfills all conditions.
9333
HA_ERR_END_OF_FILE - "" -
9334
other if some error occurred
9337
int QUICK_GROUP_MIN_MAX_SELECT::next_min()
9341
/* Find the MIN key using the eventually extended group prefix. */
9342
if (min_max_ranges.elements > 0)
9344
if ((result= next_min_in_range()))
9349
/* Apply the constant equality conditions to the non-group select fields */
9350
if (key_infix_len > 0)
9352
if ((result= file->index_read_map(record, group_prefix,
9353
make_prev_keypart_map(real_key_parts),
9354
HA_READ_KEY_EXACT)))
9359
If the min/max argument field is NULL, skip subsequent rows in the same
9360
group with NULL in it. Notice that:
9361
- if the first row in a group doesn't have a NULL in the field, no row
9362
in the same group has (because NULL < any other value),
9363
- min_max_arg_part->field->ptr points to some place in 'record'.
9365
if (min_max_arg_part && min_max_arg_part->field->is_null())
9367
/* Find the first subsequent record without NULL in the MIN/MAX field. */
9368
key_copy(tmp_record, record, index_info, 0);
9369
result= file->index_read_map(record, tmp_record,
9370
make_keypart_map(real_key_parts),
9373
Check if the new record belongs to the current group by comparing its
9374
prefix with the group's prefix. If it is from the next group, then the
9375
whole group has NULLs in the MIN/MAX field, so use the first record in
9376
the group as a result.
9378
It is possible to reuse this new record as the result candidate for the
9379
next call to next_min(), and to save one lookup in the next call. For
9380
this add a new member 'this->next_group_prefix'.
9384
if (key_cmp(index_info->key_part, group_prefix, real_prefix_len))
9385
key_restore(record, tmp_record, index_info, 0);
9387
else if (result == HA_ERR_KEY_NOT_FOUND || result == HA_ERR_END_OF_FILE)
9388
result= 0; /* There is a result in any case. */
9393
If the MIN attribute is non-nullable, this->record already contains the
9394
MIN key in the group, so just return.
9401
Retrieve the maximal key in the next group.
9404
QUICK_GROUP_MIN_MAX_SELECT::next_max()
9407
Lookup the maximal key of the group, and store it into this->record.
9411
HA_ERR_KEY_NOT_FOUND if no MAX key was found that fulfills all conditions.
9412
HA_ERR_END_OF_FILE - "" -
9413
other if some error occurred
9416
int QUICK_GROUP_MIN_MAX_SELECT::next_max()
9420
/* Get the last key in the (possibly extended) group. */
9421
if (min_max_ranges.elements > 0)
9422
result= next_max_in_range();
9424
result= file->index_read_map(record, group_prefix,
9425
make_prev_keypart_map(real_key_parts),
9426
HA_READ_PREFIX_LAST);
9432
Determine the prefix of the next group.
9435
QUICK_GROUP_MIN_MAX_SELECT::next_prefix()
9438
Determine the prefix of the next group that satisfies the query conditions.
9439
If there is a range condition referencing the group attributes, use a
9440
QUICK_RANGE_SELECT object to retrieve the *first* key that satisfies the
9441
condition. If there is a key infix of constants, append this infix
9442
immediately after the group attributes. The possibly extended prefix is
9443
stored in this->group_prefix. The first key of the found group is stored in
9444
this->record, on which this->next_min() relies.
9448
HA_ERR_KEY_NOT_FOUND if there is no key with the formed prefix
9449
HA_ERR_END_OF_FILE if there are no more keys
9450
other if some error occurred
9452
int QUICK_GROUP_MIN_MAX_SELECT::next_prefix()
9456
if (quick_prefix_select)
9458
unsigned char *cur_prefix= seen_first_key ? group_prefix : NULL;
9459
if ((result= quick_prefix_select->get_next_prefix(group_prefix_len,
9460
make_prev_keypart_map(group_key_parts), cur_prefix)))
9462
seen_first_key= true;
9466
if (!seen_first_key)
9468
result= file->index_first(record);
9471
seen_first_key= true;
9475
/* Load the first key in this group into record. */
9476
result= file->index_read_map(record, group_prefix,
9477
make_prev_keypart_map(group_key_parts),
9484
/* Save the prefix of this group for subsequent calls. */
9485
key_copy(group_prefix, record, index_info, group_prefix_len);
9486
/* Append key_infix to group_prefix. */
9487
if (key_infix_len > 0)
9488
memcpy(group_prefix + group_prefix_len,
9489
key_infix, key_infix_len);
9496
Find the minimal key in a group that satisfies some range conditions for the
9497
min/max argument field.
9500
QUICK_GROUP_MIN_MAX_SELECT::next_min_in_range()
9503
Given the sequence of ranges min_max_ranges, find the minimal key that is
9504
in the left-most possible range. If there is no such key, then the current
9505
group does not have a MIN key that satisfies the WHERE clause. If a key is
9506
found, its value is stored in this->record.
9510
HA_ERR_KEY_NOT_FOUND if there is no key with the given prefix in any of the ranges
9512
HA_ERR_END_OF_FILE - "" -
9516
int QUICK_GROUP_MIN_MAX_SELECT::next_min_in_range()
9518
ha_rkey_function find_flag;
9519
key_part_map keypart_map;
9520
QUICK_RANGE *cur_range;
9521
bool found_null= false;
9522
int result= HA_ERR_KEY_NOT_FOUND;
9523
basic_string<unsigned char> max_key;
9525
max_key.reserve(real_prefix_len + min_max_arg_len);
9527
assert(min_max_ranges.elements > 0);
9529
for (uint32_t range_idx= 0; range_idx < min_max_ranges.elements; range_idx++)
9530
{ /* Search from the left-most range to the right. */
9531
get_dynamic(&min_max_ranges, (unsigned char*)&cur_range, range_idx);
9534
If the current value for the min/max argument is bigger than the right
9535
boundary of cur_range, there is no need to check this range.
9537
if (range_idx != 0 && !(cur_range->flag & NO_MAX_RANGE) &&
9538
(key_cmp(min_max_arg_part, (const unsigned char*) cur_range->max_key,
9539
min_max_arg_len) == 1))
9542
if (cur_range->flag & NO_MIN_RANGE)
9544
keypart_map= make_prev_keypart_map(real_key_parts);
9545
find_flag= HA_READ_KEY_EXACT;
9549
/* Extend the search key with the lower boundary for this range. */
9550
memcpy(group_prefix + real_prefix_len, cur_range->min_key,
9551
cur_range->min_length);
9552
keypart_map= make_keypart_map(real_key_parts);
9553
find_flag= (cur_range->flag & (EQ_RANGE | NULL_RANGE)) ?
9554
HA_READ_KEY_EXACT : (cur_range->flag & NEAR_MIN) ?
9555
HA_READ_AFTER_KEY : HA_READ_KEY_OR_NEXT;
9558
result= file->index_read_map(record, group_prefix, keypart_map, find_flag);
9561
if ((result == HA_ERR_KEY_NOT_FOUND || result == HA_ERR_END_OF_FILE) &&
9562
(cur_range->flag & (EQ_RANGE | NULL_RANGE)))
9563
continue; /* Check the next range. */
9566
In all other cases (HA_ERR_*, HA_READ_KEY_EXACT with NO_MIN_RANGE,
9567
HA_READ_AFTER_KEY, HA_READ_KEY_OR_NEXT) if the lookup failed for this
9568
range, it can't succeed for any other subsequent range.
9573
/* A key was found. */
9574
if (cur_range->flag & EQ_RANGE)
9575
break; /* No need to perform the checks below for equal keys. */
9577
if (cur_range->flag & NULL_RANGE)
9580
Remember this key, and continue looking for a non-NULL key that
9581
satisfies some other condition.
9583
memcpy(tmp_record, record, head->s->rec_buff_length);
9588
/* Check if record belongs to the current group. */
9589
if (key_cmp(index_info->key_part, group_prefix, real_prefix_len))
9591
result= HA_ERR_KEY_NOT_FOUND;
9595
/* If there is an upper limit, check if the found key is in the range. */
9596
if ( !(cur_range->flag & NO_MAX_RANGE) )
9598
/* Compose the MAX key for the range. */
9600
max_key.append(group_prefix, real_prefix_len);
9601
max_key.append(cur_range->max_key, cur_range->max_length);
9602
/* Compare the found key with max_key. */
9603
int cmp_res= key_cmp(index_info->key_part,
9605
real_prefix_len + min_max_arg_len);
9606
if (!(((cur_range->flag & NEAR_MAX) && (cmp_res == -1)) ||
9609
result= HA_ERR_KEY_NOT_FOUND;
9613
/* If we got to this point, the current key qualifies as MIN. */
9614
assert(result == 0);
9618
If there was a key with NULL in the MIN/MAX field, and there was no other
9619
key without NULL from the same group that satisfies some other condition,
9620
then use the key with the NULL.
9622
if (found_null && result)
9624
memcpy(record, tmp_record, head->s->rec_buff_length);
9632
Find the maximal key in a group that satisfies some range conditions for the
9633
min/max argument field.
9636
QUICK_GROUP_MIN_MAX_SELECT::next_max_in_range()
9639
Given the sequence of ranges min_max_ranges, find the maximal key that is
9640
in the right-most possible range. If there is no such key, then the current
9641
group does not have a MAX key that satisfies the WHERE clause. If a key is
9642
found, its value is stored in this->record.
9646
HA_ERR_KEY_NOT_FOUND if there is no key with the given prefix in any of the ranges
9648
HA_ERR_END_OF_FILE - "" -
9652
int QUICK_GROUP_MIN_MAX_SELECT::next_max_in_range()
9654
ha_rkey_function find_flag;
9655
key_part_map keypart_map;
9656
QUICK_RANGE *cur_range;
9658
basic_string<unsigned char> min_key;
9659
min_key.reserve(real_prefix_len + min_max_arg_len);
9661
assert(min_max_ranges.elements > 0);
9663
for (uint32_t range_idx= min_max_ranges.elements; range_idx > 0; range_idx--)
9664
{ /* Search from the right-most range to the left. */
9665
get_dynamic(&min_max_ranges, (unsigned char*)&cur_range, range_idx - 1);
9668
If the current value for the min/max argument is smaller than the left
9669
boundary of cur_range, there is no need to check this range.
9671
if (range_idx != min_max_ranges.elements &&
9672
!(cur_range->flag & NO_MIN_RANGE) &&
9673
(key_cmp(min_max_arg_part, (const unsigned char*) cur_range->min_key,
9674
min_max_arg_len) == -1))
9677
if (cur_range->flag & NO_MAX_RANGE)
9679
keypart_map= make_prev_keypart_map(real_key_parts);
9680
find_flag= HA_READ_PREFIX_LAST;
9684
/* Extend the search key with the upper boundary for this range. */
9685
memcpy(group_prefix + real_prefix_len, cur_range->max_key,
9686
cur_range->max_length);
9687
keypart_map= make_keypart_map(real_key_parts);
9688
find_flag= (cur_range->flag & EQ_RANGE) ?
9689
HA_READ_KEY_EXACT : (cur_range->flag & NEAR_MAX) ?
9690
HA_READ_BEFORE_KEY : HA_READ_PREFIX_LAST_OR_PREV;
9693
result= file->index_read_map(record, group_prefix, keypart_map, find_flag);
9697
if ((result == HA_ERR_KEY_NOT_FOUND || result == HA_ERR_END_OF_FILE) &&
9698
(cur_range->flag & EQ_RANGE))
9699
continue; /* Check the next range. */
9702
If no key was found with this upper bound, there certainly are no keys
9703
in the ranges to the left.
9707
/* A key was found. */
9708
if (cur_range->flag & EQ_RANGE)
9709
return 0; /* No need to perform the checks below for equal keys. */
9711
/* Check if record belongs to the current group. */
9712
if (key_cmp(index_info->key_part, group_prefix, real_prefix_len))
9713
continue; // Row not found
9715
/* If there is a lower limit, check if the found key is in the range. */
9716
if ( !(cur_range->flag & NO_MIN_RANGE) )
9718
/* Compose the MIN key for the range. */
9720
min_key.append(group_prefix, real_prefix_len);
9721
min_key.append(cur_range->min_key, cur_range->min_length);
9723
/* Compare the found key with min_key. */
9724
int cmp_res= key_cmp(index_info->key_part,
9726
real_prefix_len + min_max_arg_len);
9727
if (!(((cur_range->flag & NEAR_MIN) && (cmp_res == 1)) ||
9731
/* If we got to this point, the current key qualifies as MAX. */
9734
return HA_ERR_KEY_NOT_FOUND;
9739
Update all MIN function results with the newly found value.
9742
QUICK_GROUP_MIN_MAX_SELECT::update_min_result()
9745
The method iterates through all MIN functions and updates the result value
9746
of each function by calling Item_sum::reset(), which in turn picks the new
9747
result value from this->head->record[0], previously updated by
9748
next_min(). The updated value is stored in a member variable of each of the
9749
Item_sum objects, depending on the value type.
9752
The update must be done separately for MIN and MAX, immediately after
9753
next_min() was called and before next_max() is called, because both MIN and
9754
MAX take their result value from the same buffer this->head->record[0]
9755
(i.e. this->record).
9761
void QUICK_GROUP_MIN_MAX_SELECT::update_min_result()
9765
min_functions_it->rewind();
9766
while ((min_func= (*min_functions_it)++))
9772
Update all MAX function results with the newly found value.
9775
QUICK_GROUP_MIN_MAX_SELECT::update_max_result()
9778
The method iterates through all MAX functions and updates the result value
9779
of each function by calling Item_sum::reset(), which in turn picks the new
9780
result value from this->head->record[0], previously updated by
9781
next_max(). The updated value is stored in a member variable of each of the
9782
Item_sum objects, depending on the value type.
9785
The update must be done separately for MIN and MAX, immediately after
9786
next_max() was called, because both MIN and MAX take their result value
9787
from the same buffer this->head->record[0] (i.e. this->record).
9793
void QUICK_GROUP_MIN_MAX_SELECT::update_max_result()
9797
max_functions_it->rewind();
9798
while ((max_func= (*max_functions_it)++))
9804
Append comma-separated list of keys this quick select uses to key_names;
9805
append comma-separated list of corresponding used lengths to used_lengths.
9808
QUICK_GROUP_MIN_MAX_SELECT::add_keys_and_lengths()
9809
key_names [out] Names of used indexes
9810
used_lengths [out] Corresponding lengths of the index names
9813
This method is used by select_describe to extract the names of the
9814
indexes used by a quick select.
9818
void QUICK_GROUP_MIN_MAX_SELECT::add_keys_and_lengths(String *key_names,
9819
String *used_lengths)
9823
key_names->append(index_info->name);
9824
length= int64_t2str(max_used_key_length, buf, 10) - buf;
9825
used_lengths->append(buf, length);
9828
static void print_sel_tree(PARAM *param, SEL_TREE *tree, key_map *tree_map, const char *)
9830
SEL_ARG **key,**end;
9834
String tmp(buff,sizeof(buff),&my_charset_bin);
9836
for (idx= 0,key=tree->keys, end=key+param->keys ;
9840
if (tree_map->test(idx))
9842
uint32_t keynr= param->real_keynr[idx];
9845
tmp.append(param->table->key_info[keynr].name);
9849
tmp.append(STRING_WITH_LEN("(empty)"));
9853
static void print_ror_scans_arr(Table *table,
9854
const char *, struct st_ror_scan_info **start,
9855
struct st_ror_scan_info **end)
9858
String tmp(buff,sizeof(buff),&my_charset_bin);
9860
for (;start != end; start++)
9864
tmp.append(table->key_info[(*start)->keynr].name);
9867
tmp.append(STRING_WITH_LEN("(empty)"));
9870
/*****************************************************************************
9871
** Instantiate templates
9872
*****************************************************************************/
9874
#ifdef HAVE_EXPLICIT_TEMPLATE_INSTANTIATION
9875
template class List<QUICK_RANGE>;
9876
template class List_iterator<QUICK_RANGE>;