@@ -127 +127 @@
   bool using_limit= limit != HA_POS_ERROR;
-  bool safe_update= test(session->options & OPTION_SAFE_UPDATES);
-  bool used_key_is_modified, transactional_table, will_batch;
+  bool used_key_is_modified;
+  bool transactional_table;
   bool can_compare_record;
   int error, loc_error;
   uint used_index= MAX_KEY, dup_key_found;
   bool need_sort= true;
   ha_rows updated, found;
@@ -215 +215 @@
   select= optimizer::make_select(table, 0, 0, conds, 0, &error);
   if (error || !limit ||
-      (select && select->check_quick(session, safe_update, limit)))
+      (select && select->check_quick(session, false, limit)))
@@ -238 +238 @@
   if (table->quick_keys.none())
...
     session->server_status|=SERVER_QUERY_NO_INDEX_USED;
-    if (safe_update && !using_limit)
...
-      my_message(ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE,
-                 ER(ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE), MYF(0));
...
   table->mark_columns_needed_for_update();
@@ -450 +443 @@
       continue; /* repeat the read of the same row if it still exists */
...
     table->storeRecord();
-    if (fill_record(session, fields, values, 0))
+    if (fill_record(session, fields, values))
...
     if (!can_compare_record || table->compare_record())
@@ -463 +453 @@
-      Typically a batched handler can execute the batched jobs when:
-      1) When specifically told to do so
-      2) When it is not a good idea to batch anymore
-      3) When it is necessary to send batch for other reasons
-      (One such reason is when READ's must be performed)
-
-      1) is covered by exec_bulk_update calls.
-      2) and 3) is handled by the bulk_update_row method.
-
-      bulk_update_row can execute the updates including the one
-      defined in the bulk_update_row or not including the row
-      in the call. This is up to the handler implementation and can
-      vary from call to call.
-
-      The dup_key_found reports the number of duplicate keys found
-      in those updates actually executed. It only reports those if
-      the extra call with HA_EXTRA_IGNORE_DUP_KEY have been issued.
-      If this hasn't been issued it returns an error code and can
-      ignore this number. Thus any handler that implements batching
-      for UPDATE IGNORE must also handle this extra call properly.
-
-      If a duplicate key is found on the record included in this
-      call then it should be included in the count of dup_key_found
-      and error should be set to 0 (only if these errors are ignored).
...
-        error= table->cursor->ha_bulk_update_row(table->record[1],
...
-        limit+= dup_key_found;
-        updated-= dup_key_found;
...
-        /* Non-batched update */
-        error= table->cursor->ha_update_row(table->record[1],
+      /* Non-batched update */
+      error= table->cursor->ha_update_row(table->record[1],
                                           table->record[0]);
...
     if (!error || error == HA_ERR_RECORD_IS_THE_SAME)
...
     if (error != HA_ERR_RECORD_IS_THE_SAME)
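The removed comment above spells out the accounting contract for `dup_key_found`: under `UPDATE IGNORE`, rows rejected as duplicates were already charged against `LIMIT` and the updated-row count, so after `ha_bulk_update_row` reports them the server gives them back (`limit+= dup_key_found; updated-= dup_key_found;`). A minimal self-contained sketch of that rule, using hypothetical names (`UpdateCounters`, `settle_ignored_duplicates`) rather than the real handler API:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-ins for the server-side counters; the field names
// follow the diff (limit, updated) but this is not the real handler API.
struct UpdateCounters
{
  uint64_t limit;    // rows still allowed by the LIMIT clause
  uint64_t updated;  // rows counted as updated so far
};

// After a batched call reports dup_key_found ignored duplicate-key rows,
// those rows were counted as updated but changed nothing, so they are
// re-credited to the limit and debited from the updated count (mirrors
// "limit+= dup_key_found; updated-= dup_key_found;" in the removed code).
inline void settle_ignored_duplicates(UpdateCounters &c, uint64_t dup_key_found)
{
  c.limit+= dup_key_found;
  c.updated-= dup_key_found;
}
```

With `limit == 2` and `updated == 5`, settling three ignored duplicates leaves `limit == 5` and `updated == 2`: the three phantom updates are undone on both sides.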
@@ -526 +482 @@
     if (!--limit && using_limit)
...
-      We have reached end-of-cursor in most common situations where no
-      batching has occurred and if batching was supposed to occur but
-      no updates were made and finally when the batch execution was
-      performed without error and without finding any duplicate keys.
-      If the batched updates were performed with errors we need to
-      check and if no error but duplicate key's found we need to
-      continue since those are not counted for in limit.
...
-      ((error= table->cursor->exec_bulk_update(&dup_key_found)) ||
...
-        The handler should not report error of duplicate keys if they
-        are ignored. This is a requirement on batching handlers.
...
-        prepare_record_for_error_message(error, table);
-        table->print_error(error,MYF(0));
...
-        Either an error was found and we are ignoring errors or there
-        were duplicate keys found. In both cases we need to correct
-        the counters and continue the loop.
...
-      limit= dup_key_found; //limit is 0 when we get here so need to +
-      updated-= dup_key_found;
...
-      error= -1;  // Simulate end of cursor
+    error= -1;  // Simulate end of cursor
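The removed flush-at-LIMIT logic above decides whether the scan loop may stop: a clean `exec_bulk_update` with no duplicates means the LIMIT is truly satisfied, while ignored duplicates must be re-credited (`limit= dup_key_found`, which is a net add since `limit` is 0 at that point) and the loop continued. A simplified sketch of that decision, with hypothetical names (`continue_after_batch_flush`); it models only the no-error path and treats any real error as an abort, whereas the original also continued when errors were being ignored:

```cpp
#include <cassert>
#include <cstdint>

// exec_error and dup_key_found stand in for the outputs of the handler's
// exec_bulk_update(); the real cursor/handler objects are not modeled.
// Returns true when the scan loop should keep going because ignored
// duplicate-key rows must be re-run against the remaining LIMIT.
inline bool continue_after_batch_flush(int exec_error, uint64_t dup_key_found,
                                       uint64_t &limit, uint64_t &updated)
{
  if (exec_error == 0 && dup_key_found == 0)
    return false;             // clean flush: stop, LIMIT satisfied
  if (exec_error != 0)
    return false;             // real error: caller reports and aborts
  limit= dup_key_found;       // limit is 0 here, so this re-credits it
  updated-= dup_key_found;    // those rows were not really updated
  return true;
}
```

For example, flushing with three ignored duplicates and seven rows counted leaves `limit == 3`, `updated == 4`, and the loop continues; a clean flush returns false and the loop exits via the simulated end-of-cursor.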
@@ -581 +502 @@
   // simulated killing after the loop must be ineffective for binlogging
   error= (killed_status == Session::NOT_KILLED)? error : 1;
...
-      (loc_error= table->cursor->exec_bulk_update(&dup_key_found)))
...
-      An error has occurred when a batched update was performed and returned
-      an error indication. It cannot be an allowed duplicate key error since
-      we require the batching handler to treat this as a normal behavior.
-
-      Otherwise we simply remove the number of duplicate keys records found
-      in the batched update.
...
-    prepare_record_for_error_message(loc_error, table);
-    table->print_error(loc_error,MYF(ME_FATALERROR));
...
-    updated-= dup_key_found;
...
-    table->cursor->end_bulk_update();
+  updated-= dup_key_found;
   table->cursor->try_semi_consistent_read(0);
...
   if (!transactional_table && updated > 0)
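The context lines kept by the patch (`if (!error || error == HA_ERR_RECORD_IS_THE_SAME)` followed by `if (error != HA_ERR_RECORD_IS_THE_SAME)`) encode a counting convention: a "record is the same" result from `ha_update_row` is treated as success, but the row is not counted as updated. A small sketch of that convention; `count_as_updated` is a hypothetical helper and the numeric error value is an arbitrary placeholder, not the real `HA_ERR_RECORD_IS_THE_SAME` constant:

```cpp
#include <cassert>

// Placeholder error codes for the sketch; the real values live in the
// handler error-code headers and are not reproduced here.
enum { SKETCH_OK= 0, SKETCH_HA_ERR_RECORD_IS_THE_SAME= 169 };

// Mirrors the kept logic: "record is the same" is success (no error is
// raised to the client) but the row does not increment the updated count;
// any other non-zero error is a real failure handled on the error path.
inline bool count_as_updated(int error)
{
  if (error != SKETCH_OK && error != SKETCH_HA_ERR_RECORD_IS_THE_SAME)
    return false;             // real failure: caller takes the error path
  return error == SKETCH_OK;  // only genuine changes bump the counter
}
```

So an UPDATE that assigns a column its existing value succeeds silently yet reports zero rows changed, which matches the `updated` bookkeeping in the surrounding loop.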