  bool using_limit= limit != HA_POS_ERROR;
  bool safe_update= test(session->options & OPTION_SAFE_UPDATES);
- bool used_key_is_modified, transactional_table, will_batch;
+ bool used_key_is_modified;
+ bool transactional_table;
  bool can_compare_record;
  int error, loc_error;
  uint used_index= MAX_KEY, dup_key_found;
  bool need_sort= true;
  ha_rows updated, found;
        continue;  /* repeat the read of the same row if it still exists */

      table->storeRecord();
-     if (fill_record(session, fields, values, 0))
+     if (fill_record(session, fields, values))

      if (!can_compare_record || table->compare_record())
-       /*
-         Typically a batched handler can execute the batched jobs when:
-         1) it is specifically told to do so
-         2) it is not a good idea to batch anymore
-         3) it is necessary to send the batch for other reasons
-            (one such reason is when READs must be performed)
-
-         1) is covered by exec_bulk_update calls.
-         2) and 3) are handled by the bulk_update_row method.
-
-         bulk_update_row can execute the updates either including or
-         excluding the row passed in the current call. This is up to the
-         handler implementation and can vary from call to call.
-
-         dup_key_found reports the number of duplicate keys found in those
-         updates that were actually executed. It reports them only if the
-         extra call with HA_EXTRA_IGNORE_DUP_KEY has been issued; if it
-         hasn't, an error code is returned and this number can be ignored.
-         Thus any handler that implements batching for UPDATE IGNORE must
-         also handle this extra call properly.
-
-         If a duplicate key is found on the record included in this call,
-         it should be included in the count of dup_key_found and error
-         should be set to 0 (only if these errors are ignored).
-       */
-       error= table->cursor->ha_bulk_update_row(table->record[1],
-       limit+= dup_key_found;
-       updated-= dup_key_found;
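The two counter corrections in the removed branch can be sketched in isolation (the struct and helper names below are mine, not the server's): rows that hit an ignored duplicate key inside the batch were provisionally charged against both the LIMIT budget and the updated-row count, so once the batch reports dup_key_found, both counters must be given back.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for the two counters adjusted after a batched update.
struct UpdateCounters
{
  uint64_t limit;    // rows the LIMIT clause still allows
  uint64_t updated;  // rows reported as updated so far
};

// Rows that hit an ignored duplicate key were neither updated nor
// legitimately consumed from the limit, so both charges are reversed.
void correct_for_dup_keys(UpdateCounters &c, uint64_t dup_key_found)
{
  c.limit+= dup_key_found;
  c.updated-= dup_key_found;
}
```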
-       /* Non-batched update */
-       error= table->cursor->ha_update_row(table->record[1],
+     /* Non-batched update */
+     error= table->cursor->ha_update_row(table->record[1],
                                          table->record[0]);
      if (!error || error == HA_ERR_RECORD_IS_THE_SAME)

        if (error != HA_ERR_RECORD_IS_THE_SAME)
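The pair of tests above treats HA_ERR_RECORD_IS_THE_SAME as a success: the row matched, but its new image equals the old one, so it is not an error, yet it must not be counted as an updated row either. A minimal sketch of that accounting, with hypothetical stand-in codes (the numeric values are illustrative, not the server's real ones):

```cpp
#include <cassert>

// Illustrative stand-ins for the handler return codes referenced above.
enum HandlerResult { HA_OK = 0, HA_ERR_RECORD_IS_THE_SAME = 169, HA_ERR_OTHER = 120 };

// Mirrors the hunk's control flow: anything other than a real error is
// success, but only a genuinely changed row increments the counter.
bool note_update_result(int error, unsigned long &updated)
{
  if (error != HA_OK && error != HA_ERR_RECORD_IS_THE_SAME)
    return false;               // a real error: the caller aborts the scan
  if (error != HA_ERR_RECORD_IS_THE_SAME)
    ++updated;                  // the row image actually changed
  return true;
}
```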
      if (!--limit && using_limit)
-       /*
-         We have reached end-of-cursor in the most common situations: where
-         no batching has occurred, where batching was supposed to occur but
-         no updates were made, and where the batch execution completed
-         without error and without finding any duplicate keys.
-
-         If the batched updates were performed with errors we need to check
-         for them; if there was no error but duplicate keys were found, we
-         must continue, since those rows are not counted against the limit.
-       */
-       ((error= table->cursor->exec_bulk_update(&dup_key_found)) ||
-         /*
-           The handler should not report an error for duplicate keys if
-           they are ignored. This is a requirement on batching handlers.
-         */
-         prepare_record_for_error_message(error, table);
-         table->print_error(error, MYF(0));
-         /*
-           Either an error was found and we are ignoring errors, or there
-           were duplicate keys found. In both cases we need to correct the
-           counters and continue the loop.
-         */
-         limit= dup_key_found; //limit is 0 when we get here so need to +
-         updated-= dup_key_found;
-       error= -1;  // Simulate end of cursor
+     error= -1;  // Simulate end of cursor
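The decision the removed block made when the LIMIT budget ran out can be sketched as follows (hypothetical helper; exec_error and dup_key_found stand in for exec_bulk_update()'s return value and output parameter, not the real Cursor API):

```cpp
#include <cassert>
#include <cstdint>

// Possible outcomes of the limit-exhausted path in the removed block.
enum class ScanDecision { Stop, Continue, Fail };

// On success with ignored duplicates, the limit (0 at this point) is
// restored from dup_key_found and the scan continues, because those rows
// were never really updated and must not count against LIMIT.
ScanDecision on_limit_reached(int exec_error, uint64_t dup_key_found,
                              uint64_t &limit, uint64_t &updated)
{
  if (exec_error)
    return ScanDecision::Fail;       // batch failed: print the error, abort
  if (dup_key_found)
  {
    limit= dup_key_found;            // give the duplicate rows back to LIMIT
    updated-= dup_key_found;         // they were not actually updated
    return ScanDecision::Continue;   // keep scanning
  }
  return ScanDecision::Stop;         // error= -1: simulate end of cursor
}
```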
      // simulated killing after the loop must be ineffective for binlogging
      error= (killed_status == Session::NOT_KILLED) ? error : 1;
-     (loc_error= table->cursor->exec_bulk_update(&dup_key_found)))
-     /*
-       An error has occurred while a batched update was performed and
-       returned an error indication. It cannot be an allowed duplicate key
-       error, since we require the batching handler to treat that as normal
-       behavior.
-
-       Otherwise we simply subtract the number of duplicate key records
-       found in the batched update.
-     */
-     prepare_record_for_error_message(loc_error, table);
-     table->print_error(loc_error, MYF(ME_FATALERROR));
-     updated-= dup_key_found;
-     table->cursor->end_bulk_update();
      updated-= dup_key_found;
      table->cursor->try_semi_consistent_read(0);

      if (!transactional_table && updated > 0)
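The final condition's rationale can be made explicit with a small sketch (helper name is mine): a non-transactional engine cannot roll back, so once any row has been updated the change is permanent, and the statement must be recorded (e.g. marked for logging) even if a later error aborts it.

```cpp
#include <cassert>
#include <cstdint>

// A non-transactional table with at least one updated row has been
// irreversibly modified, regardless of the statement's final error code.
bool must_record_changes(bool transactional_table, uint64_t updated)
{
  return !transactional_table && updated > 0;
}
```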