This file contains:

  A module that accepts a condition, index (or partitioning) description,
  and builds lists of intervals (in index/partitioning space), such that
  all possible records that match the condition are contained within the
  intervals.
  The entry point for the range analysis module is get_mm_tree() function.

  The lists are returned in form of complicated structure of interlinked
  SEL_TREE/SEL_IMERGE/SEL_ARG objects.
  See quick_range_seq_next, find_used_partitions for examples of how to walk
  this structure.
  All direct "users" of this module are located within this file, too.
  The code in this file (and elsewhere) makes operations on key value tuples.
  Those tuples are stored in the following format:

  The tuple is a sequence of key part values. The length of key part value
  depends only on its type (and not on what value is stored).

    KeyTuple: keypart1-data, keypart2-data, ...

  The value of each keypart is stored in the following format:

    keypart_data: [isnull_byte] keypart-value-bytes

  If a keypart may have a NULL value (key_part->field->real_maybe_null() can
  be used to check this), then the first byte is a NULL indicator with the
  following valid values:
    1 - keypart has NULL value.
    0 - keypart has non-NULL value.

  See key_copy() and key_restore() for code to move data between index tuple
  and table record formats.

  CAUTION: the above description is only sergefp's understanding of the
  subject and may omit some details.
#include <drizzled/server_includes.h>
#include <drizzled/sql_select.h>
#include <drizzled/error.h>
#include <drizzled/cost_vect.h>
#include <drizzled/item/cmpfunc.h>
#include <drizzled/field/num.h>
#include <drizzled/check_stack_overrun.h>
#include "drizzled/temporal.h" /* Needed in get_mm_leaf() for timestamp -> datetime comparisons */

#define test_rb_tree(A,B) {}
#define test_use_count(A) {}
/*
  Convert double value to #rows. Currently this does floor(), and we
  might consider using round() instead.
*/
static inline ha_rows double2rows(double x)
{
  return static_cast<ha_rows>(x);
}

static int sel_cmp(Field *f,unsigned char *a,unsigned char *b,uint8_t a_flag,uint8_t b_flag);
 +-------+ $ $ +--------+
 ... $ $ +--------+

  The entire graph is partitioned into "interval lists".

  An interval list is a sequence of ordered disjoint intervals over the same
  key part. SEL_ARG are linked via "next" and "prev" pointers. Additionally,
  all intervals in the list form an RB-tree, linked via left/right/parent
  pointers. The RB-tree root SEL_ARG object will be further called "root of the
  interval list".

  In the example pic, there are 4 interval lists:
  "kp<1 OR kp1=2 OR kp1=3", "kp2=5", "kp3=10 OR kp3=12", "kp3=11 OR kp3=13".
  The vertical lines represent SEL_ARG::next/prev pointers.

  In an interval list, each member X may have SEL_ARG::next_key_part pointer
  pointing to the root of another interval list Y. The pointed interval list
  must cover a key part with greater number (i.e. Y->part > X->part).

  In the example pic, the next_key_part pointers are represented by
  horizontal lines.
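The linkage just described can be summarized with a simplified node type. This is a hedged sketch for illustration only; IntervalNode is an invented name and the real SEL_ARG class carries much more state:

```cpp
#include <cassert>
#include <cstddef>

/*
  Simplified sketch of the pointer structure described above.  Within
  one interval list, nodes are chained via next/prev and also form an
  RB-tree via left/right/parent; next_key_part points to the root of
  an interval list over a strictly greater key part number.
*/
struct IntervalNode
{
  int part;                              /* key part this node constrains */
  IntervalNode *left, *right, *parent;   /* RB-tree links (same list) */
  IntervalNode *next, *prev;             /* ordered chain (same list) */
  IntervalNode *next_key_part;           /* root of a later keypart's list */

  explicit IntervalNode(int p)
    : part(p), left(NULL), right(NULL), parent(NULL),
      next(NULL), prev(NULL), next_key_part(NULL) {}
};
```

The invariant to keep in mind is the one stated above: whenever next_key_part is set, the pointed-to list covers a greater key part number than the pointing node.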
  and create QUICK_RANGE_SELECT object that will
  read records within these intervals.
  4. SPACE COMPLEXITY NOTES

    SEL_ARG graph is a representation of an ordered disjoint sequence of
    intervals over the ordered set of index tuple values.

    For multi-part keys, one can construct a WHERE expression such that its
    list of intervals will be of combinatorial size. Here is an example:

      (keypart1 IN (1,2, ..., n1)) AND
      (keypart2 IN (1,2, ..., n2)) AND
      (keypart3 IN (1,2, ..., n3))

    For this WHERE clause the list of intervals will have n1*n2*n3 intervals
    of form

      (keypart1, keypart2, keypart3) = (k1, k2, k3), where 1 <= k{i} <= n{i}

    SEL_ARG graph structure aims to reduce the amount of required space by
    "sharing" the elementary intervals when possible (the pic at the
    beginning of this comment has examples of such sharing). The sharing may
    prevent combinatorial blowup:

    There are WHERE clauses that have combinatorial-size interval lists but
    will be represented by a compact SEL_ARG graph.

      (keypartN IN (1,2, ..., n1)) AND
      (keypart2 IN (1,2, ..., n2)) AND
      (keypart1 IN (1,2, ..., n3))

    but not in all cases:
    By induction: Let's take any interval on some keypart in the middle:

    Then let's AND it with this interval 'structure' from preceding and
    following keyparts:

      ((kp14=c1 AND kp16=c3) OR keypart14=c2)   (*)

    We will obtain this SEL_ARG graph:

         kp14 $ kp15 $ kp16
     +---------+ $ +---------+ $ +---------+
     | kp14=c1 |--$-->| kp15=c0 |--$-->| kp16=c3 |
     +---------+ $ +---------+ $ +---------+
     +---------+ $ +---------+ $
     | kp14=c2 |--$-->| kp15=c0 | $
     +---------+ $ +---------+ $

    Note that we had to duplicate "kp15=c0" and there was no way to avoid
    that.
    The induction step: AND the obtained expression with another "wrapping"
    expression like (*).
    When the process ends because of the limit on max. number of keyparts

      WHERE clause length is O(3*#max_keyparts)
bool SQL_SELECT::check_quick(Session *session, bool force_quick_range,
                             ha_rows limit)
{
  return test_quick_select(session, tmp, 0, limit,
                           force_quick_range, false) < 0;
}

bool SQL_SELECT::skip_record()
{
  return cond ? cond->val_int() == 0 : 0;
}
QUICK_SELECT_I::QUICK_SELECT_I()
  :max_used_key_length(0),
   used_key_parts(0)
{}
QUICK_RANGE_SELECT::QUICK_RANGE_SELECT(Session *session, Table *table, uint32_t key_nr,
                                       bool no_alloc, MEM_ROOT *parent_alloc)
  :free_file(0),cur_range(NULL),last_range(0),dont_free(0)
{
  in_ror_merged_scan= 0;
  key_part_info= head->key_info[index].key_part;
  my_init_dynamic_array(&ranges, sizeof(QUICK_RANGE*), 16, 16);

  /* 'session' is not accessible in QUICK_RANGE_SELECT::reset(). */
  mrr_buf_size= session->variables.read_rnd_buff_size;
  mrr_buf_desc= NULL;

  if (!no_alloc && !parent_alloc)
  {
    // Allocates everything through the internal memroot
    init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
    session->mem_root= &alloc;
  }
  else
    memset(&alloc, 0, sizeof(alloc));
QUICK_ROR_INTERSECT_SELECT::QUICK_ROR_INTERSECT_SELECT(Session *session_param,
                                                       bool retrieve_full_rows,
                                                       MEM_ROOT *parent_alloc)
  : cpk_quick(NULL), session(session_param), need_to_fetch_row(retrieve_full_rows),
    scans_inited(false)
{
  index= MAX_KEY;
  record= head->record[0];
  if (!parent_alloc)
    init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
  else
    memset(&alloc, 0, sizeof(MEM_ROOT));
  last_rowid= (unsigned char*) alloc_root(parent_alloc? parent_alloc : &alloc,
  /* already have own 'handler' object. */
  session= head->in_use;
  if (!(file= head->file->clone(session->mem_root)))
  {
    /*
      Manually set the error flag. Note: there seems to be quite a few
      places where a failure could cause the server to "hang" the client by
      sending no response to a query. ATM those are not real errors because
      the storage engine calls in question happen to never fail with the
      existing storage engines.
    */
    my_error(ER_OUT_OF_RESOURCES, MYF(0)); /* purecov: inspected */
    /* Caller will free the memory */
QUICK_ROR_UNION_SELECT::QUICK_ROR_UNION_SELECT(Session *session_param,
  : session(session_param), scans_inited(false)
{
  index= MAX_KEY;
  rowid_length= table->file->ref_length;
  record= head->record[0];
  init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
  session_param->mem_root= &alloc;
/**
 * Function object that is used as the comparison function
 * for the priority queue in the QUICK_ROR_UNION_SELECT
 */
class compare_functor
{
  QUICK_ROR_UNION_SELECT *self;
public:
  compare_functor(QUICK_ROR_UNION_SELECT *in_arg)
    : self(in_arg) { }
  inline bool operator()(const QUICK_SELECT_I *i, const QUICK_SELECT_I *j) const
  {
    int val= self->head->file->cmp_ref(i->last_rowid,
  Do post-constructor initialization.
  /* Table read plans are allocated on MEM_ROOT and are never deleted */
  static void *operator new(size_t size, MEM_ROOT *mem_root)
  { return (void*) alloc_root(mem_root, (uint32_t) size); }
  static void operator delete(void *ptr, size_t size)
  { TRASH(ptr, size); }
  static void operator delete(void *, MEM_ROOT *)
  { /* Never called */ }
  virtual ~TABLE_READ_PLAN() {} /* Remove gcc warning */
static int fill_used_fields_bitmap(PARAM *param)
{
  Table *table= param->table;
  param->fields_bitmap_size= table->s->column_bitmap_size;
  param->needed_fields = *(table->read_set);
  param->needed_fields |= *(table->write_set);

  pk= param->table->s->primary_key;
  if (pk != MAX_KEY && param->table->file->primary_key_is_clustered())
    quick_condition_rows value is obtained as follows:

      It is a minimum of E(#output rows) for all considered table access
      methods (range and index_merge accesses over various indexes).

    The obtained value is not a true E(#rows that satisfy table condition)
    but rather a pessimistic estimate. To obtain a true E(#...) one would
    need to combine estimates of various access methods, taking into account
    correlations between sets of rows they will return.

    For example, if values of tbl.key1 and tbl.key2 are independent (a right
    assumption if we have no information about their correlation) then the
    correct estimate will be:

      E(#rows("tbl.key1 < c1 AND tbl.key2 < c2")) =
      = E(#rows(tbl.key1 < c1)) / total_rows(tbl) * E(#rows(tbl.key2 < c2))

    which is smaller than

      MIN(E(#rows(tbl.key1 < c1)), E(#rows(tbl.key2 < c2)))

    which is currently produced.

    * Change the value returned in quick_condition_rows from a pessimistic
      estimate to true E(#rows that satisfy table condition).
      (we can re-use some of E(#rows) calculation code from index_merge/intersection)

    * Check if this function really needs to modify keys_to_use, and change the
      code to pass it by reference if it doesn't.
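The two estimates contrasted above are easy to compute side by side. This is an illustrative sketch under the stated independence assumption; both helper names are invented and do not exist in this file:

```cpp
#include <cassert>

/*
  Under the independence assumption described above, the combined
  estimate for "key1 < c1 AND key2 < c2" is E1 / total * E2.
*/
static double independent_rows_estimate(double e1, double e2,
                                        double total_rows)
{
  return e1 / total_rows * e2;
}

/* The pessimistic MIN(E1, E2) that the code currently produces. */
static double pessimistic_rows_estimate(double e1, double e2)
{
  return e1 < e2 ? e1 : e2;
}
```

For E1 = 100, E2 = 200 over 1000 rows the independent combination gives about 20 rows, far below the MIN estimate of 100, which is the gap the TODO above is about.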
  param.table=head;
  param.mem_root= &alloc;
  param.old_root= session->mem_root;
  param.needed_reg= &needed_reg;
  param.imerge_cost_buff_size= 0;
  param.using_real_indexes= true;
  param.remove_jump_scans= true;
  param.force_default_mrr= ordered_output;

  session->no_errors=1;				// Don't warn about NULL
  init_sql_alloc(&alloc, session->variables.range_alloc_block_size, 0);
  if (!(param.key_parts= (KEY_PART*) alloc_root(&alloc,
                                                sizeof(KEY_PART)*
                                                head->s->key_parts)) ||
      fill_used_fields_bitmap(&param))
  {
    session->no_errors=0;
    free_root(&alloc,MYF(0));			// Return memory & allocator
    return 0;					// Can't use range
  }
  key_parts= param.key_parts;
  session->mem_root= &alloc;

  /*
    Make an array with description of all key parts of all table keys.
  */
  unique_calc_buff_size=
    Unique::get_cost_calc_buff_size((ulong)non_cpk_scan_records,
                                    param->table->file->ref_length,
                                    param->session->variables.sortbuff_size);
  if (param->imerge_cost_buff_size < unique_calc_buff_size)

  if (!(param->imerge_cost_buff= (uint*)alloc_root(param->mem_root,
                                                   unique_calc_buff_size)))

  param->imerge_cost_buff_size= unique_calc_buff_size;

  Unique::get_use_cost(param->imerge_cost_buff, (uint32_t)non_cpk_scan_records,
                       param->table->file->ref_length,
                       param->session->variables.sortbuff_size);
  if (imerge_cost < read_time)
  {
    if ((imerge_trp= new (param->mem_root)TRP_INDEX_MERGE))
  if (!(bitmap_buf= (my_bitmap_map*) alloc_root(param->mem_root,
                                                param->fields_bitmap_size)))
    return NULL;
  ror_scan->covered_fields.reset();

  KEY_PART_INFO *key_part= param->table->key_info[keynr].key_part;
  KEY_PART_INFO *key_part_end= key_part +
                               param->table->key_info[keynr].key_parts;
  for (;key_part != key_part_end; ++key_part)
  {
    if (param->needed_fields.test(key_part->fieldnr-1))
      ror_scan->covered_fields.set(key_part->fieldnr-1);
  }
  double rows= rows2double(param->table->quick_rows[ror_scan->keynr]);
  ror_scan->index_read_cost=
ROR_INTERSECT_INFO* ror_intersect_init(const PARAM *param)
{
  ROR_INTERSECT_INFO *info;
  if (!(info= (ROR_INTERSECT_INFO*)alloc_root(param->mem_root,
                                              sizeof(ROR_INTERSECT_INFO))))
    return NULL;
  info->param= param;
  info->is_covering= false;
  info->index_scan_costs= 0.0;
  info->index_records= 0;
  info->out_rows= (double) param->table->file->stats.records;
  info->covered_fields.reset();
void ror_intersect_cpy(ROR_INTERSECT_INFO *dst, const ROR_INTERSECT_INFO *src)
{
  dst->param= src->param;
  dst->covered_fields= src->covered_fields;
  dst->out_rows= src->out_rows;
  dst->is_covering= src->is_covering;
  dst->index_records= src->index_records;
  if (selectivity_mult == 1.0)
  {
    /* Don't add this scan if it doesn't improve selectivity. */
    return false;
  }

  info->out_rows *= selectivity_mult;

  if (is_cpk_scan)
  {
    /*
      CPK scan is used to filter out rows. We apply filtering for
      each record of every scan. Assuming 1/TIME_FOR_COMPARE_ROWID
      per check this gives us:
    */
    info->index_scan_costs += rows2double(info->index_records) /
                              TIME_FOR_COMPARE_ROWID;
  }
  else
  {
    info->index_records += info->param->table->quick_rows[ror_scan->keynr];
    info->index_scan_costs += ror_scan->index_read_cost;
    info->covered_fields |= ror_scan->covered_fields;
    if (! info->is_covering &&
        ((info->covered_fields & info->param->needed_fields) == info->param->needed_fields))
      info->is_covering= true;
  }
    total_cost += (*ror_scan_mark)->index_read_cost;
    records += (*ror_scan_mark)->records;
    if (total_cost > read_time)
      return NULL;
    /* F=F-covered by first(I) */
    *covered_fields |= (*ror_scan_mark)->covered_fields;
    /*
     * Check whether the param->needed_fields bitset is a subset of
     * the covered_fields bitset. If the param->needed_fields bitset
     * is a subset of covered_fields, then set all_covered to
     * true; otherwise, set it to false.
     */
    all_covered= ((*covered_fields & param->needed_fields) == param->needed_fields);
  } while ((++ror_scan_mark < ror_scans_end) && !all_covered);

  if (!all_covered || (ror_scan_mark - tree->ror_scans) == 1)

    Ok, [tree->ror_scans .. ror_scan) holds covering index_intersection with
#define NOT_IN_IGNORE_THRESHOLD 1000
    MEM_ROOT *tmp_root= param->mem_root;
    param->session->mem_root= param->old_root;
    /*
      Create one Item_type constant object. We'll need it as
      get_mm_parts only accepts constant values wrapped in Item_Type
      objects.
      We create the Item on param->mem_root which points to
      per-statement mem_root (while session->mem_root is currently pointing
      to mem_root local to range optimizer).
    */
    Item *value_item= func->array->create_item();
    param->session->mem_root= tmp_root;

    if (func->array->count > NOT_IN_IGNORE_THRESHOLD || !value_item)

    /* Get a SEL_TREE for "(-inf|NULL) < X < c_0" interval. */
    func->array->value_to_item(i, value_item);
    tree= get_mm_parts(param, cond_func, field, Item_func::LT_FUNC,
  Yet a predicate of the form (c BETWEEN f1i AND f2i) is processed
  differently. It is considered as a conjunction of two SARGable
  predicates (f1i <= c) and (f2i <= c) and the function get_full_func_mm_tree
  is called for each of them separately producing trees for
    AND j (f1j <= c) and AND j (f2j <= c)
  After this these two trees are united in one conjunctive tree.
  It's easy to see that the same tree is obtained for
    AND j,k (f1j <= c AND f2k <= c)
  which is equivalent to
    AND j,k (c BETWEEN f1j AND f2k).

  The validity of the processing of the predicate (c NOT BETWEEN f1i AND f2i)
  which is equivalent to (f1i > c OR f2i < c) is not so obvious. Here the
  function get_full_func_mm_tree is called for (f1i > c) and (f2i < c)
  producing trees for AND j (f1j > c) and AND j (f2j < c). Then these two
  trees are united in one OR-tree. The expression
    (AND j (f1j > c) OR AND j (f2j < c))
  is equivalent to the expression
    AND j,k (f1j > c OR f2k < c)
  which is just a translation of
    AND j,k (c NOT BETWEEN f1j AND f2k)

  In the cases when one of the items f1, f2 is a constant c1 we do not create

  As to IN predicates only ones of the form (f IN (c1,...,cn)),
  where f is a field and c1,...,cn are constants, are considered as
  SARGable. We never try to narrow the index scan using predicates of
  the form (c IN (c1,...,f,...,cn)).

    Pointer to the tree representing the built conjunction of SEL_TREEs

static SEL_TREE *get_full_func_mm_tree(RANGE_OPT_PARAM *param,
                                       Item_func *cond_func,
                                       Item_field *field_item, Item *value,

  SEL_TREE *tree= 0;
  if (cond->const_item())
  {
    /*
      During the cond->val_int() evaluation we can come across a subselect
      item which may allocate memory on the session->mem_root and assumes
      all the memory allocated has the same life span as the subselect
      item itself. So we have to restore the thread's mem_root here.
    */
    MEM_ROOT *tmp_root= param->mem_root;
    param->session->mem_root= param->old_root;
    tree= cond->val_int() ? new(tmp_root) SEL_TREE(SEL_TREE::ALWAYS) :
                            new(tmp_root) SEL_TREE(SEL_TREE::IMPOSSIBLE);
    param->session->mem_root= tmp_root;
static SEL_TREE *
get_mm_parts(RANGE_OPT_PARAM *param, COND *cond_func, Field *field,
             Item_func::Functype type,
             Item *value, Item_result)
{
  if (field->table != param->table)
    return 0;

  KEY_PART *key_part = param->key_parts;
  KEY_PART *end = param->key_parts_end;
  SEL_TREE *tree=0;
  if (value &&
      value->used_tables() & ~(param->prev_tables | param->read_tables))
    return 0;
  for (; key_part != end ; key_part++)
  {
    if (field->eq(key_part->field))
    {
      SEL_ARG *sel_arg=0;
      if (!tree && !(tree=new SEL_TREE()))
        return 0;				// OOM
      if (!value || !(value->used_tables() & ~param->read_tables))
      {
        sel_arg=get_mm_leaf(param,cond_func,
      value->result_type() != STRING_RESULT &&
      field->cmp_type() != value->result_type())
  /*
   * Some notes from Jay...
   *
   * OK, so previously, and in MySQL, what the optimizer does here is
   * override the sql_mode variable to ignore out-of-range or bad date-
   * time values. It does this because the optimizer is populating the
   * field variable with the incoming value from the comparison field,
   * and the value may exceed the bounds of a proper column type.
   *
   * For instance, assume the following:
   *
   * CREATE TABLE t1 (ts TIMESTAMP);
   * INSERT INTO t1 ('2009-03-04 00:00:00');
   * CREATE TABLE t2 (dt1 DATETIME, dt2 DATETIME);
   * INSERT INTO t2 ('2003-12-31 00:00:00','2999-12-31 00:00:00');
   *
   * If we issue this query:
   *
   * SELECT * FROM t1, t2 WHERE t1.ts BETWEEN t2.dt1 AND t2.dt2;
   *
   * We will come into bounds issues. Field_timestamp::store() will be
   * called with a datetime value of "2999-12-31 00:00:00" and will throw
   * an error for out-of-bounds. MySQL solves this via a hack with sql_mode
   * but Drizzle always throws errors on bad data storage in a Field class.
   *
   * Therefore, to get around the problem of the Field class being used for
   * "storage" here without actually storing anything...we must check to see
   * if the value being stored in a Field_timestamp here is out of range. If
   * it is, then we must convert to the highest Timestamp value (or lowest,
   * depending on whether the datetime is before or after the epoch).
   */
  if (field->type() == DRIZZLE_TYPE_TIMESTAMP)
  {
    /*
     * The left-side of the range comparison is a timestamp field. Therefore,
     * we must check to see if the value in the right-hand side is outside the
     * range of the UNIX epoch, and cut to the epoch bounds if it is.
     */
    /* Datetime and date columns are Item::FIELD_ITEM ... and have a result type of STRING_RESULT */
    if (value->real_item()->type() == Item::FIELD_ITEM
        && value->result_type() == STRING_RESULT)
    {
      char buff[MAX_DATETIME_FULL_WIDTH];
      String tmp(buff, sizeof(buff), &my_charset_bin);
      String *res= value->val_str(&tmp);

      /*
       * Create a datetime from the string and compare to fixed timestamp
       * instances representing the epoch boundaries.
       */
      drizzled::DateTime value_datetime;

      if (! value_datetime.from_string(res->c_ptr(), (size_t) res->length()))

      drizzled::Timestamp max_timestamp;
      drizzled::Timestamp min_timestamp;

      (void) max_timestamp.from_time_t((time_t) INT32_MAX);
      (void) min_timestamp.from_time_t((time_t) 0);

      /* We rely on Temporal class operator overloads to do our comparisons. */
      if (value_datetime < min_timestamp)
      {
        /*
         * Datetime in right-hand side column is before UNIX epoch, so adjust to
         * the lower bound of the epoch.
         */
        char new_value_buff[MAX_DATETIME_FULL_WIDTH];
        size_t new_value_length;
        String new_value_string(new_value_buff, sizeof(new_value_buff), &my_charset_bin);

        min_timestamp.to_string(new_value_string.c_ptr(), &new_value_length);
        new_value_string.length(new_value_length);
        err= value->save_str_value_in_field(field, &new_value_string);
      }
      else if (value_datetime > max_timestamp)
      {
        /*
         * Datetime in right hand side column is after UNIX epoch, so adjust
         * to the higher bound of the epoch.
         */
        char new_value_buff[MAX_DATETIME_FULL_WIDTH];
        size_t new_value_length;
        String new_value_string(new_value_buff, sizeof(new_value_buff), &my_charset_bin);

        max_timestamp.to_string(new_value_string.c_ptr(), &new_value_length);
        new_value_string.length(new_value_length);
        err= value->save_str_value_in_field(field, &new_value_string);
      }
      else
        err= value->save_in_field(field, 1);
    }
    else /* Not a datetime -> timestamp comparison */
      err= value->save_in_field(field, 1);
  }
  else /* Not a timestamp comparison */
    err= value->save_in_field(field, 1);

  if (field->cmp_type() != value->result_type())
  call tree_and(tree1, tree2) -- this joins SEL_ARGs into a usable SEL_ARG

  There is an exception though: when we construct index_merge SEL_TREE,
  any SEL_ARG* tree that cannot be used to construct quick range select can
  be removed, because current range analysis code doesn't provide any way
  that tree could be later combined with another tree.
  Consider an example: we should not construct
    merges = SEL_IMERGE {
      SEL_TREE(t.key1part1 = 1),
      SEL_TREE(t.key2part2 = 2)   -- (*)

   - (*) cannot be used to construct quick range select,
   - There is no execution path that would cause (*) to be converted to
     a tree that could be used.

  The latter is easy to verify: first, notice that the only way to convert
  (*) into a usable tree is to call tree_and(something, (*)).

  Second look at what tree_and/tree_or function would do when passed a
  SEL_TREE that has the structure like st1 tree has, and conclude that
  tree_and(something, (*)) will not be called.
    return 0;					// Found end of tree
  if (element->parent != parent)
  {
    errmsg_printf(ERRMSG_LVL_ERROR, "Wrong tree: Parent doesn't point at parent");
    return -1;
  }
  if (element->color == SEL_ARG::RED &&
      (element->left->color == SEL_ARG::RED ||
       element->right->color == SEL_ARG::RED))
  {
    errmsg_printf(ERRMSG_LVL_ERROR, "Wrong tree: Found two red in a row");
    return -1;
  }
  if (element->left == element->right && element->left != &null_element)
  {						// Dummy test
    errmsg_printf(ERRMSG_LVL_ERROR, "Wrong tree: Found right == left");
    return -1;
  }
  count_l=test_rb_tree(element->left,element);
  This function counts how many times the node "key" is referred (via
  SEL_ARG::next_key_part) by
   - intervals of RB-tree pointed by "root",
   - intervals of RB-trees that are pointed by SEL_ARG::next_key_part from
     intervals of RB-tree pointed by "root",

  Here is an example (horizontal links represent next_key_part pointers,
  vertical links - next/prev pointers):

         |root|-----------------+
****************************************************************************/

/* MRR range sequence, SEL_ARG* implementation: stack entry */
typedef struct st_range_seq_entry
{
  /*
    Pointers in min and max keys. They point to right-after-end of key
    images. The 0-th entry has these pointing to key tuple start.
  */
  unsigned char *min_key, *max_key;

  /*
    Flags, for {keypart0, keypart1, ... this_keypart} subtuple.
    min_key_flag may have NULL_RANGE set.
  */
  uint32_t min_key_flag, max_key_flag;

  /* Number of key parts */
  uint32_t min_key_parts, max_key_parts;
  SEL_ARG *key_tree;
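The `min_key`/`max_key` pointers above advance through a key tuple stored in the format described at the top of this file: each nullable keypart is prefixed by one NULL-indicator byte, and the value slot has a fixed, type-determined length even when the value is NULL. A minimal sketch of appending one nullable keypart (the helper name is illustrative):

```cpp
#include <cstring>

// Sketch of the keypart_data layout from the file header:
//   keypart_data: [isnull_byte] keypart-value-bytes
// Appends one nullable keypart at "pos" and returns the new
// right-after-end pointer, which is what a RANGE_SEQ_ENTRY's
// min_key/max_key hold for each stack entry.
static unsigned char *append_nullable_keypart(unsigned char *pos,
                                              const unsigned char *value,
                                              size_t value_len,
                                              bool is_null)
{
  *pos++ = is_null ? 1 : 0;        // 1 - keypart has NULL value
  if (!is_null)
    memcpy(pos, value, value_len); // value bytes follow the indicator
  return pos + value_len;          // slot length is fixed even for NULL
}
```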
  /* Ok got a tuple */
  RANGE_SEQ_ENTRY *cur= &seq->stack[seq->i];

  range->ptr= (char*)(int)(key_tree->part);

  range->range_flag= cur->min_key_flag | cur->max_key_flag;

  range->start_key.key=    seq->param->min_key;
  range->start_key.length= cur->min_key - seq->param->min_key;
  range->start_key.keypart_map= make_prev_keypart_map(cur->min_key_parts);
  range->start_key.flag= (cur->min_key_flag & NEAR_MIN ? HA_READ_AFTER_KEY :
                                                         HA_READ_KEY_EXACT);

  range->end_key.key=    seq->param->max_key;
  range->end_key.length= cur->max_key - seq->param->max_key;
  range->end_key.flag= (cur->max_key_flag & NEAR_MAX ? HA_READ_BEFORE_KEY :
                                                       HA_READ_AFTER_KEY);
  range->end_key.keypart_map= make_prev_keypart_map(cur->max_key_parts);

  if (!(cur->min_key_flag & ~NULL_RANGE) && !cur->max_key_flag &&
      (uint32_t)key_tree->part+1 == seq->param->table->key_info[seq->real_keyno].key_parts &&
      (seq->param->table->key_info[seq->real_keyno].flags & (HA_NOSAME)) ==
      HA_NOSAME &&
      range->start_key.length == range->end_key.length &&
      !memcmp(seq->param->min_key,seq->param->max_key,range->start_key.length))
    range->range_flag= UNIQUE_RANGE | (cur->min_key_flag & NULL_RANGE);

  if (seq->param->is_ror_scan)
  param->is_ror_scan= true;
  if (file->index_flags(keynr, 0, true) & HA_KEY_SCAN_NOT_ROR)
    param->is_ror_scan= false;

  *mrr_flags= param->force_default_mrr? HA_MRR_USE_DEFAULT_IMPL: 0;
  *mrr_flags|= HA_MRR_NO_ASSOCIATION;

  bool pk_is_clustered= file->primary_key_is_clustered();
  if (index_only &&
      (file->index_flags(keynr, param->max_key_part, 1) & HA_KEYREAD_ONLY) &&
      !(pk_is_clustered && keynr == param->table->s->primary_key))
    *mrr_flags |= HA_MRR_INDEX_ONLY;

  if (current_session->lex->sql_command != SQLCOM_SELECT)
    *mrr_flags |= HA_MRR_USE_DEFAULT_IMPL;

  *bufsize= param->session->variables.read_rnd_buff_size;
  rows= file->multi_range_read_info_const(keynr, &seq_if, (void*)&seq, 0,
                                          bufsize, mrr_flags, cost);
  if (rows != HA_POS_ERROR)
  where the index is defined on (key1_1, ..., key1_N [,a_1, ..., a_n])

  and the table has a clustered Primary Key defined as
    PRIMARY KEY(a_1, ..., a_n, b1, ..., b_k)

  i.e. the first key parts of it are identical to uncovered parts of the
  key being scanned. This function assumes that the index flags do not
  include HA_KEY_SCAN_NOT_ROR flag (that is checked elsewhere).
  /* Get range for retrieving rows in QUICK_SELECT::get_next */
  if (!(range= new QUICK_RANGE(param->min_key,
                               (uint32_t) (tmp_min_key - param->min_key),
                               min_part >=0 ? make_keypart_map(min_part) : 0,
                               param->max_key,
                               (uint32_t) (tmp_max_key - param->max_key),
                               max_part >=0 ? make_keypart_map(max_part) : 0,
                               flag)))
    return 1;			// out of memory

  set_if_bigger(quick->max_used_key_length, (uint32_t)range->min_length);
  set_if_bigger(quick->max_used_key_length, (uint32_t)range->max_length);
  set_if_bigger(quick->used_key_parts, (uint32_t) key_tree->part+1);
  if (insert_dynamic(&quick->ranges, (unsigned char*) &range))
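The keypart maps passed above are bitmaps with one bit per key part. Assuming the usual definitions (which is how these macros are defined in the MySQL/Drizzle headers as I understand them: `make_keypart_map(n)` sets bits 0..n inclusive, `make_prev_keypart_map(n)` sets the bits for the n keyparts before keypart n), a sketch:

```cpp
#include <stdint.h>

typedef uint64_t key_part_map_t;  // stand-in for key_part_map

// Bits for keyparts 0..n inclusive (sketch of make_keypart_map(n)).
static key_part_map_t keypart_map(unsigned n)
{
  return (key_part_map_t(2) << n) - 1;
}

// Bits for the n keyparts preceding keypart n (make_prev_keypart_map(n)).
static key_part_map_t prev_keypart_map(unsigned n)
{
  return (key_part_map_t(1) << n) - 1;
}
```

So a range over the first three keyparts carries map `0b111`, and the "previous parts" map for keypart 0 is empty.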
bool QUICK_SELECT_I::is_keys_used(const bitset<MAX_FIELDS> *fields)
{
  return is_key_used(head, index, fields);
}

bool QUICK_INDEX_MERGE_SELECT::is_keys_used(const bitset<MAX_FIELDS> *fields)
{
  QUICK_RANGE_SELECT *quick;
  List_iterator_fast<QUICK_RANGE_SELECT> it(quick_selects);
  while ((quick= it++))
  {
    if (is_key_used(head, quick->index, fields))
      return 1;
  }
  return 0;
}

bool QUICK_ROR_INTERSECT_SELECT::is_keys_used(const bitset<MAX_FIELDS> *fields)
{
  QUICK_RANGE_SELECT *quick;
  List_iterator_fast<QUICK_RANGE_SELECT> it(quick_selects);
  while ((quick= it++))
  {
    if (is_key_used(head, quick->index, fields))
      return 1;
  }
  return 0;
}

bool QUICK_ROR_UNION_SELECT::is_keys_used(const bitset<MAX_FIELDS> *fields)
{
  QUICK_SELECT_I *quick;
  List_iterator_fast<QUICK_SELECT_I> it(quick_selects);
  bool create_err= false;
  COST_VECT cost;

  old_root= session->mem_root;
  /* The following call may change session->mem_root */
  quick= new QUICK_RANGE_SELECT(session, table, ref->key, 0, 0);
  /* save mem_root set by QUICK_RANGE_SELECT constructor */
  alloc= session->mem_root;
  /*
    return back default mem_root (session->mem_root) changed by
    QUICK_RANGE_SELECT constructor
  */
  session->mem_root= old_root;

  if (!quick || create_err)
    return 0;			/* no ranges found */
  /* Call multi_range_read_info() to get the MRR flags and buffer size */
  quick->mrr_flags= HA_MRR_NO_ASSOCIATION |
                    (table->key_read ? HA_MRR_INDEX_ONLY : 0);
  if (session->lex->sql_command != SQLCOM_SELECT)
    quick->mrr_flags |= HA_MRR_USE_DEFAULT_IMPL;

  quick->mrr_buf_size= session->variables.read_rnd_buff_size;
  if (table->file->multi_range_read_info(quick->index, 1, (uint32_t)records,
                                         &quick->mrr_buf_size,
                                         &quick->mrr_flags, &cost))
/*
  Range sequence interface implementation for array<QUICK_RANGE>: initialize

  SYNOPSIS
    quick_range_seq_init()
      init_param  Caller-opaque parameter: QUICK_RANGE_SELECT* pointer
      n_ranges    Number of ranges in the sequence (ignored)
      flags       MRR flags (currently not used)

  RETURN
    Opaque value to be passed to quick_range_seq_next
*/
range_seq_t quick_range_seq_init(void *init_param, uint32_t, uint32_t)
{
  QUICK_RANGE_SELECT *quick= (QUICK_RANGE_SELECT*)init_param;
  quick->qr_traversal_ctx.first= (QUICK_RANGE**)quick->ranges.buffer;
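The init/next pattern used by this range-sequence interface — `init()` returns an opaque cursor over caller-owned ranges, `next()` yields one range at a time until exhaustion — can be sketched independently of the handler API. `Range` and `RangeCursor` below are illustrative stand-ins for `QUICK_RANGE` and the traversal context:

```cpp
#include <cstddef>

// Toy range and cursor types standing in for QUICK_RANGE and
// the qr_traversal_ctx above.
struct Range { int min_key, max_key; };

struct RangeCursor
{
  const Range *cur, *last;
};

// init(): the caller-opaque parameter doubles as the sequence handle.
static void *range_seq_init(void *init_param)
{
  return init_param;
}

// next(): returns 1 when the sequence is exhausted, otherwise fills
// *out with the next range and returns 0.
static int range_seq_next(void *seq, Range *out)
{
  RangeCursor *ctx = static_cast<RangeCursor *>(seq);
  if (ctx->cur == ctx->last)
    return 1;
  *out = *ctx->cur++;
  return 0;
}
```

This is the shape the MRR implementation drives: it never sees the range storage, only the opaque handle and the next() callback.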
  /* Perform a few 'cheap' tests whether this access method is applicable. */
  if (!join)
    return NULL;        /* This is not a select statement. */
  if ((join->tables != 1) ||  /* The query must reference one table. */
      ((!join->group_list) && /* Neither GROUP BY nor a DISTINCT query. */
       (!join->select_distinct)) ||
      (join->select_lex->olap == ROLLUP_TYPE)) /* Check (B3) for ROLLUP */
    return NULL;
  if (table->s->keys == 0)        /* There are no indexes to use. */
    return NULL;

  /* Analyze the query in more detail. */
  List_iterator<Item> select_items_it(join->fields_list);

  /* Check (SA1,SA4) and store the only MIN/MAX argument - the C attribute.*/
  if (join->make_sum_func_list(join->all_fields, join->fields_list, 1))
    return NULL;
  if (join->sum_funcs[0])
  {
    Item_sum *min_max_item;
get_constant_key_infix(KEY *, SEL_ARG *index_range_tree,
                       KEY_PART_INFO *first_non_group_part,
                       KEY_PART_INFO *min_max_arg_part,
                       KEY_PART_INFO *last_part,
                       Session *, unsigned char *key_infix, uint32_t *key_infix_len,
                       KEY_PART_INFO **first_non_infix_part)
{
  SEL_ARG *cur_range;
  keys_per_block= (table->file->stats.block_size / 2 /
                   (index_info->key_length + table->file->ref_length)
                  + 1);
  num_blocks= (uint32_t)(table_records / keys_per_block) + 1;

  /* Compute the number of keys in a group. */
  keys_per_group= index_info->rec_per_key[group_key_parts - 1];
  if (keys_per_group == 0) /* If there is no statistics try to guess */
    /* each group contains 10% of all records */
    keys_per_group= (uint32_t)(table_records / 10) + 1;
  num_groups= (uint32_t)(table_records / keys_per_group) + 1;

  /* Apply the selectivity of the quick select for group prefixes. */
  if (range_tree && (quick_prefix_records != HA_POS_ERROR))
  {
    quick_prefix_selectivity= (double) quick_prefix_records /
                              (double) table_records;
    num_groups= (uint32_t) rint(num_groups * quick_prefix_selectivity);
    set_if_bigger(num_groups, 1U);
  }

  if (used_key_parts > group_key_parts)
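The estimates above can be worked through numerically: half a block's worth of key+rowid entries per block, group size taken from index statistics or guessed as 10% of the table, and the group count then scaled by the quick-prefix selectivity. A sketch with made-up input values (the function name and the `GroupEstimate` struct are illustrative, and the selectivity is applied unconditionally here, unlike the guarded code above):

```cpp
#include <cmath>
#include <algorithm>
#include <stdint.h>

struct GroupEstimate { uint32_t keys_per_block, num_blocks, num_groups; };

// Numeric sketch of the cost inputs computed above; inputs are assumed.
static GroupEstimate estimate_groups(uint64_t table_records,
                                     uint32_t block_size,
                                     uint32_t key_length,
                                     uint32_t ref_length,
                                     uint32_t rec_per_key,  // 0 = no stats
                                     double quick_prefix_selectivity)
{
  GroupEstimate e;
  // Half a block of (key + row reference) entries, as above.
  e.keys_per_block = block_size / 2 / (key_length + ref_length) + 1;
  e.num_blocks = uint32_t(table_records / e.keys_per_block) + 1;

  uint32_t keys_per_group = rec_per_key;
  if (keys_per_group == 0)                // no statistics: guess that each
    keys_per_group = uint32_t(table_records / 10) + 1;  // group is 10%
  e.num_groups = uint32_t(table_records / keys_per_group) + 1;

  // Scale by the fraction of groups the quick prefix select retains.
  e.num_groups = uint32_t(std::rint(e.num_groups * quick_prefix_selectivity));
  e.num_groups = std::max<uint32_t>(e.num_groups, 1);
  return e;
}
```

For example, with 10,000 rows, 16 KB blocks, 12-byte keys, 4-byte row references, 100 keys per group, and selectivity 0.4: 513 keys per block, 20 blocks, and 101 groups scaled down to 40.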
QUICK_SELECT_I *
TRP_GROUP_MIN_MAX::make_quick(PARAM *param, bool, MEM_ROOT *parent_alloc)
{
  QUICK_GROUP_MIN_MAX_SELECT *quick;

  quick= new QUICK_GROUP_MIN_MAX_SELECT(param->table,
                                        param->session->lex->current_select->join,
                                        have_min, have_max, min_max_arg_part,
                                        group_prefix_len, group_key_parts,
                                        used_key_parts, index_info, index,
                                        read_cost, records, key_infix_len,
                                        key_infix, parent_alloc);

  if (quick->init())
  {
    delete quick;
    return NULL;
  }

  if (range_tree)
  if ( !(cur_range->flag & NO_MAX_RANGE) )
  {
    /* Compose the MAX key for the range. */
    max_key.append(group_prefix, real_prefix_len);
    max_key.append(cur_range->max_key, cur_range->max_length);

    /* Compare the found key with max_key. */
    int cmp_res= key_cmp(index_info->key_part,
                         max_key.data(),
                         real_prefix_len + min_max_arg_len);
    if ((!((cur_range->flag & NEAR_MAX) && (cmp_res == -1)) || (cmp_res <= 0)))
  if ( !(cur_range->flag & NO_MIN_RANGE) )
  {
    /* Compose the MIN key for the range. */
    min_key.append(group_prefix, real_prefix_len);
    min_key.append(cur_range->min_key, cur_range->min_length);

    /* Compare the found key with min_key. */
    int cmp_res= key_cmp(index_info->key_part,
                         min_key.data(),
                         real_prefix_len + min_max_arg_len);
    if ((!((cur_range->flag & NEAR_MIN) && (cmp_res == 1)) ||
         (cmp_res >= 0)))
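The MIN/MAX endpoint composed above is simply the group prefix followed by the range's key image, byte-compared against the key found in the index. A self-contained sketch using `std::vector<unsigned char>` as the key buffer (names are illustrative; `memcmp` stands in for `key_cmp()`, which additionally understands keypart types and NULL bytes):

```cpp
#include <vector>
#include <cstring>

typedef std::vector<unsigned char> key_buf;

// Compose a range endpoint as above: group prefix + min/max key image.
static key_buf compose_endpoint(const unsigned char *group_prefix,
                                size_t prefix_len,
                                const unsigned char *range_key,
                                size_t range_key_len)
{
  key_buf key;
  key.insert(key.end(), group_prefix, group_prefix + prefix_len);
  key.insert(key.end(), range_key, range_key + range_key_len);
  return key;
}

// Byte comparison: negative/zero/positive like the cmp_res checks above.
static int compare_endpoint(const key_buf &key, const unsigned char *found,
                            size_t len)
{
  return memcmp(&key[0], found, len);
}
```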