Hi all,
I'm trying to split my data into 30 small files so that, with my limited memory, they will be processed faster. I was able to write the code using a data step like below, but I do not know how to change it to PROC SQL. I want to use PROC SQL so I can use the THREADS option. I also have a loop, which is what makes this hard for me. Can you help me?
proc sql;
  select count(*) into :total_rows from raw_data;
quit;

/* Step 3: Calculate the number of rows per file */
%let files_count = 30;
%let rows_per_file = %eval(&total_rows / &files_count);

/* Step 4: Split the dataset into 30 smaller datasets */
%macro split_dataset;
  %do i = 1 %to &files_count;
    data new_data&i;
      set raw.raw_data(firstobs = %eval((&i - 1) * &rows_per_file + 1)
                       obs = %eval(&i * &rows_per_file));
    run;
  %end;
%mend;
%split_dataset;
Your PROC SQL code reads the entire RAW_DATA data set just to get a count of observations. But you can use PROC SQL to read the metadata (DICTIONARY.TABLES in this case) to get the same information - no need to pass through the data at all:
proc sql noprint;
select nobs into :total_rows from dictionary.tables where libname='WORK' and memname='RAW_DATA' ;
quit;
In fact, you can even use it to directly generate ROWS_PER_FILE, as in:
%let files_count=30;
proc sql noprint;
select nobs/&files_count into :rows_per_file from dictionary.tables where libname='WORK' and memname='RAW_DATA' ;
quit;
Also, instead of 30 data steps, consider one data step writing 30 data sets, as in:
options mprint ;
%macro split_data;
data
%do i = 1 %to &files_count; new_data&i %end ;
;
set have;
%do i = 1 %to %eval(&files_count - 1) ;
if _n_ <= %sysevalf(&i * &rows_per_file) then output new_data&i ; else
%end;
output new_data&files_count ;
run;
%mend;
%split_data ;
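For reference, here is the kind of code the macro above generates (this is an illustration of the expansion, abridged to 3 partitions and assuming ROWS_PER_FILE resolved to 1000; with FILES_COUNT=30 you would see 30 data set names and 29 IF/ELSE branches):

```sas
/* Illustrative expansion of %split_data for files_count=3, rows_per_file=1000 */
data new_data1 new_data2 new_data3;   /* one data step writes all output data sets */
  set have;
  if _n_ <= 1000 then output new_data1; else  /* rows 1-1000        */
  if _n_ <= 2000 then output new_data2; else  /* rows 1001-2000     */
  output new_data3;                           /* remaining rows, including any remainder */
run;
```

Because the final OUTPUT has no condition, any leftover rows from the integer division land in the last data set rather than being dropped.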
See if that saves time.
Splitting the data will not make your process faster; it will make it slower overall.
A split only makes sense if you cannot process everything in one go.
Please show the code of your slow-running process; if there is room for improvement, that's where it is.
Agreeing with the others, splitting the data is probably a bad idea in this case. The programming effort is large, and any gains in speed are offset by the difficulty of doing analysis on the partitioned data sets. (Also, I don't think SAS uses a lot of memory when processing large data sets, although there are exceptions like PROC IML and a couple of others.)
Please tell us how large your data set is: we need to know the number of rows and number of columns.
Please tell us what analysis (for example: statistics, graphics, reporting) you plan to do after you have split the data set. Please be specific.
Hi Paige, and everyone,
Sorry, I do not know how to reply in a way that everyone can see.
Essentially, what I want to do is identify the users who stay in one firm for less than one year (my data is career-history data).
data user10_ind0_tag;
  set user10_ind0;
  start_date_dt = input(startdate, yymmdd10.);
  end_date_dt = input(enddate, yymmdd10.);
  enddate_imputed = coalesce(end_date_dt, input('2022-12-31', yymmdd10.));
  duration_in_days = intck('DAY', start_date_dt, enddate_imputed);
  miss_end = missing(enddate);
  miss_start = missing(startdate);
  turnover = (duration_in_days < 365);
run;
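A note on this step: a data step streams one row at a time, so its memory footprint stays small regardless of file size; what it does cost is I/O, which a KEEP= data set option can reduce by reading only the variables the step actually uses. A minimal sketch of the same step with that option (assuming only these columns are needed downstream; drop the KEEP= if the raw company-name columns must carry through):

```sas
data user10_ind0_tag;
  /* KEEP= limits the read to the variables this step uses */
  set user10_ind0(keep=uid pid startdate enddate);
  start_date_dt = input(startdate, yymmdd10.);
  end_date_dt   = input(enddate, yymmdd10.);
  /* impute missing end dates with the data cutoff date */
  enddate_imputed  = coalesce(end_date_dt, input('2022-12-31', yymmdd10.));
  duration_in_days = intck('DAY', start_date_dt, enddate_imputed);
  turnover = (duration_in_days < 365); /* stayed less than one year */
run;
```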
I have 16 files; each is 40 GB and has around 82,924,025 rows, with the following columns: uid, pid, company name raw, company url, company cleaned name, company priname, company name, ultimate company name (of the various company-name variables, I will just keep raw name, cleaned name, and ultimate company name), location raw, region, country, state, mas, startdate, enddate, jobtitle raw, mapped_role, job category, role_k150, role_k500, rolek1000, code1, code 2 (used to link to external data), ticker, wexchange, naics, naics_desc, rcid, frcid, senority, rn, salary.
I would like to keep even the raw variables to verify data accuracy, because sometimes a worker writes their job as "independent company", meaning they are self-employed, but the data provider puts that person in a company called Independent Inc. I keep the raw names in order to identify such cases.
I am trying to cut the data into small pieces because another user on the server is taking 70%-98% of the memory. I felt that sometimes my code was not processing because my data could not be read into the remaining memory, so I am trying to make the files smaller.
I appreciate any further help.