I currently use the code below to insert data from a WORK table into a permanent table. I was wondering if there is a more efficient way of doing this?
The process is:
Import CSV
Check whether each transaction ID is already in the permanent SAS table; if not, insert it
Data Work.temp;
   Set Lib.Permanent_Table (firstobs=30000000);
   Keep Transaction_ID;
Run;

Proc sql;
   INSERT INTO Lib.Permanent_Table
   SELECT * FROM WORK.Table01
   WHERE 'Transaction_Id'n NOT IN
      (SELECT 'Transaction_Id'n FROM work.temp);

   drop table work.temp;
QUIT;
The permanent table has 41,995,622 observations and 90 columns, and I am inserting roughly another 150k rows every day.
Thanks
Take a look at the UPDATE statement in the DATA step.
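For example, a minimal sketch of that approach (an untested assumption here: both data sets are, or can be, sorted by Transaction_ID; unlike PROC APPEND, this rewrites the whole permanent table on every run):

/* Sketch only: UPDATE adds observations with new keys and overlays  */
/* non-missing values for existing keys. Requires both data sets     */
/* sorted by Transaction_ID, and rewrites the entire 42M-row table.  */
proc sort data=work.Table01;
   by Transaction_ID;
run;

data Lib.Permanent_Table;
   update Lib.Permanent_Table work.Table01;
   by Transaction_ID;
run;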
A couple of ideas to try ...
First, create the set of IDs more quickly:
data work.temp;
   set lib.Permanent_Table (firstobs=30000000 keep=Transaction_ID);
run;
Your original data step read in all 90 variables, then dropped 89 of them.
Second (I'm not really familiar with the speed of SQL's INSERT, so this would be an experiment): create the subset of Table01 separately from adding it to the permanent data set:
proc sql;
   create table subset as
   select * from work.Table01
   where Transaction_ID not in
      (select Transaction_ID from work.temp);
quit;
Then appending might be much faster:
proc append data=subset base=lib.Permanent_Table;
run;
The variable definitions have to match 100% for this to work, however. And adding the new observations to the end of the existing data set has to be an acceptable outcome for your process.
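If the attributes ever stop matching, PROC APPEND will refuse the append unless the FORCE option is added; FORCE appends anyway but truncates or drops mismatched variables, so treat it as a fallback rather than the default. A sketch:

/* FORCE lets the append proceed despite attribute mismatches,       */
/* at the cost of truncating or dropping the offending variables.    */
proc append data=subset base=lib.Permanent_Table force;
run;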
To think of it the other way round:
proc sql;
   create table data_to_add as
   select * from csv_data
   where transaction_id not in
      (select distinct transaction_id from permanent_table);
quit;

proc append base=permanent_table data=data_to_add;
run;
Avoid this:
'Transaction_Id'n
If you are using SAS, use SAS naming conventions for your variables. The only time 'name'n literals do anything other than make your code messier is when you have to deal with Excel or other poor data sources.
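For example, if the CSV import is what produces non-standard column names, setting VALIDVARNAME before the import makes SAS translate them into valid V7 names, so 'name'n literals are never needed downstream (a sketch; the file path is illustrative):

options validvarname=v7;   /* non-standard column names become valid SAS names */

proc import datafile="daily_transactions.csv"   /* hypothetical path */
            out=work.Table01
            dbms=csv
            replace;
run;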
Ninety columns? 150K new observations daily? I'd suggest maintaining a second dataset of just the current transaction_id's (call it lib.id_list); a one-time setup for it is sketched after the code below. Then you could:
data new_ids (keep=transaction_id)
     data_to_append / view=data_to_append;
   set new_transactions;
   if _n_=1 then do;
      /* load the existing ID list into a hash table once */
      declare hash h (dataset:'lib.id_list');
      h.definekey('transaction_id');
      h.definedone();
   end;
   /* keep only records whose transaction_id is not already in lib.id_list */
   if h.find() ^= 0;
run;

proc append base=lib.Permanent_Table data=data_to_append;
run;
proc append base=lib.id_list data=new_ids;
run;
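If lib.id_list does not exist yet, a one-time setup from the existing permanent table might look like this (sketch):

/* One-time creation of the ID-only lookup data set */
proc sql;
   create table lib.id_list as
   select distinct Transaction_ID
   from lib.Permanent_Table;
quit;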
Edited additional note. If your original 41,995,622 observations all have unique transaction_id's, then I would modify the program above to establish as many "buckets" as possible in hash object h. So change the DECLARE statement to
declare hash h (dataset:'lib.id_list',hashexp:16);
which tells SAS to make 2**16 = 64K buckets, each holding about 641 transaction_id's. The h.find() method will surely be faster than with the default hashexp:8 (2**8 = 256 buckets, each holding about 164,000 transaction_id's). In current SAS releases the documented maximum for HASHEXP is 20, so you could push it even further.
First, make sure you have an index defined on Transaction_Id in your permanent table.
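If the index does not already exist, a one-time creation could look like this (sketch):

/* Simple index: for a single column, the index name must match the column name */
proc sql;
   create index Transaction_Id
      on Lib.Permanent_Table (Transaction_Id);
quit;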
Then try:
proc sql;
   create table work.Table02 as
   select * from work.Table01
   where Transaction_Id not in
      (select Transaction_Id from lib.Permanent_Table);

   insert into lib.Permanent_Table
   select * from work.Table02;

   drop table work.Table02;
quit;
(untested)
Your code will certainly be superior to the OP's original code. But an index on 41M unique transaction_id's (I assume they are unique) yields a binary search tree about 25 levels deep. I would be interested in comparing its performance to the hash solution I suggested: faster lookups have long been one of the established advantages of hash search over binary search, and I presume that advantage would apply here.