Saturday, July 15, 2017

Skip duplicate keys on inserts and log errors

Sometimes a bulk insert fails because of duplicate keys. Unfortunately, you don't get any information about which records caused the failure: when the database hits the first error, it rolls back the entire statement no matter how many records were successful up to that point.

Good news! There is a way to insert all the good records and list all the rows that were duplicates. In our example we got duplicate key errors inserting records into the proddta.f3013 table from the same table in another environment on another server.

First, create a table to log the errors. Pass the name of the table you're inserting into in the dml_table_name parameter.

begin
dbms_errlog.create_error_log (dml_table_name => 'proddta.f3013');
end;
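
The generated log table holds five ORA_ERR_* control columns plus a VARCHAR2 copy of every column in the base table, so each rejected row is captured alongside its error details. Roughly, describing it should show something like this:

desc proddta.err$_f3013
-- ORA_ERR_NUMBER$  NUMBER          Oracle error number (1 for ORA-00001, unique constraint violated)
-- ORA_ERR_MESG$    VARCHAR2(2000)  full error message text
-- ORA_ERR_ROWID$   ROWID           rowid of the offending row (updates/deletes only)
-- ORA_ERR_OPTYP$   VARCHAR2(2)     operation type: I, U, or D
-- ORA_ERR_TAG$     VARCHAR2(2000)  the tag you supply in the statement
-- ...followed by one VARCHAR2(4000) column for each column of f3013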

That procedure creates a table called proddta.err$_f3013. Next, append the error logging clause to the end of the SQL statement that failed. The full statement is below: it's a standard "insert into ... select *", but we're using a database link called JDENP_PUB to grab the data from another server.

The actual meat of the error logging starts with "log errors into" followed by the name of your error log table. Next comes the error tag, in this case simply ('INSERT'). You can put any text you want there; it's just a label that gets stored with each rejected row, which is handy for filtering the log later. Finally, the reject limit specifies how many failures to tolerate before the statement aborts. Setting it to unlimited tells it to process every record.

insert into proddta.f3013
select * from cvdta.f3013@JDENP_PUB
log errors into proddta.err$_f3013 ('INSERT')
reject limit unlimited;

The statement will finish without any fanfare. At this point you can commit the transaction and view your error log.
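
A quick way to see why each row was rejected is to query the control columns, filtering on the tag we used in the insert (for duplicates, ora_err_mesg$ will show ORA-00001):

select ora_err_number$, ora_err_mesg$
from proddta.err$_f3013
where ora_err_tag$ = 'INSERT';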



All the records that failed will be listed. You can export that to any file type (CSV, Excel, etc.) and send it back to the client for review.
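
One housekeeping note: the log table accumulates rows across runs, so if you plan to reuse it, clear it out once the client review is done; otherwise old errors will mix in with the next load.

truncate table proddta.err$_f3013;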