04-09-2015 12:28 PM
I have a large dataset that is appended to multiple datasets (e.g. monthly and history Oracle tables).
Currently I am using BULKLOAD to load it into these two Oracle tables. It works just fine, but it takes twice the amount of time since it creates the same .DAT file again to load into the next table.
Is there a way I can append this dataset to multiple datasets at the same time? Are there options to re-use the BL .DAT file without recreating it? I googled for answers, to no avail.
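For context, the double bulk-load pattern described above might look like this (library name, table names, and connection details are all hypothetical):

```sas
/* Hypothetical reconstruction of the current process:
   each DATA step builds the same SQL*Loader .dat file from scratch */
libname ora oracle user=scott password=XXXX path=orcl;

data ora.monthly_tbl (bulkload=yes);
   set work.big_dataset;
run;

data ora.history_tbl (bulkload=yes);  /* same .dat file re-created here */
   set work.big_dataset;
run;
```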
Thank you very much.
Have a great day.
04-09-2015 02:31 PM
It is the bulkload process, via SQL*Loader (Oracle's dedicated high-speed loader), that is populating the tables in Oracle; see http://support.sas.com/resources/papers/proceedings09/099-2009.pdf. Without bulkload, the loading process is much slower (journaling/logging and commits). It is SAS that creates that .dat file; see SAS/ACCESS(R) 9.4 for Relational Databases: Reference, Sixth Edition.
To reuse that .dat file, it must:
a/ be kept: BL_DELETE_DATAFILE=NO (YES is the default)
b/ be reused by the second load, with SAS not re-creating the .dat file.
I see what you mean: there is nothing in SAS/ACCESS to stop the file from being re-created.
The only option would be to start SQL*Loader manually/scripted; see the SQL*Loader Control File Reference. Probably not worth all the effort, as it introduces new risks/errors.
Tip: review BL_DEFAULT_DIR='host-specific-directory-path', as that location should be fast and big enough.
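As a sketch of the options discussed above (table name and path are hypothetical; note this only keeps the file after the load, SAS will still build a new one on the next bulk load):

```sas
/* Hypothetical sketch: keep the SQL*Loader work files after the load */
data ora.monthly_tbl (bulkload=yes
                      bl_delete_datafile=no               /* keep the .dat file; YES is the default */
                      bl_default_dir='/fastdisk/blwork'); /* fast, roomy work area for BL files */
   set work.big_dataset;
run;
```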
04-11-2015 11:52 AM
Thank you very much, gentlemen. I think I am not going to spend too much time on this one.
I will keep bulkload for one process and, for the other load process, I think I will just use INSERTBUFF=32767 BUFFSIZE=32000 in the LIBNAME options.
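A minimal sketch of that conventional-path alternative, assuming hypothetical connection details and table names:

```sas
/* Hypothetical sketch: conventional-path load tuned with a larger insert buffer */
libname ora oracle user=scott password=XXXX path=orcl
        insertbuff=32767;   /* rows buffered per insert call */

proc append base=ora.history_tbl data=work.big_dataset;
run;
```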
Thank you all. Have a great weekend.
04-12-2015 12:39 AM
I believe the "best" way would be to load the source table once via bulk into an Oracle staging table (not sure if it would work with a global temporary table) and then use pass-through SQL with an append hint in a multi-table insert. Something like below:
INSERT /*+ APPEND */ ALL
INTO <Ora table 1>
INTO <Ora table 2>
SELECT * from <staging_table>
In case you have table partitioning then loading directly into a partition can also speed up things quite a bit.
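Putting the pieces together, a pass-through sketch might look like this (all table names are hypothetical; it assumes the staging table was already bulk-loaded):

```sas
/* Hypothetical sketch: one staging load fans out to both targets inside Oracle */
proc sql;
   connect to oracle (user=scott password=XXXX path=orcl);
   execute (
      insert /*+ APPEND */ all
         into monthly_tbl
         into history_tbl
      select * from staging_table
   ) by oracle;
   execute (commit) by oracle;  /* direct-path inserts must be committed before the tables can be read */
   disconnect from oracle;
quit;
```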
Regarding "I think will just use INSERTBUFF=32767 buffsize=32000": make sure you also set DBCOMMIT to something better than the installation default.
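For instance, DBCOMMIT could be set on the LIBNAME statement like this (the value is hypothetical; tune it to your environment):

```sas
/* Hypothetical sketch: raise the commit interval for the conventional-path load */
libname ora oracle user=scott password=XXXX path=orcl
        dbcommit=100000;  /* rows between commits; 0 commits once after all rows */
```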