Dear Gurus,
I have a dataset containing 21 million observations and 1,500 variables.
I want to sum 500 of those variables, grouped by two index variables (say, product_code and date).
I can finish the manipulation with PROC MEANS in about 30 minutes, but I'm wondering whether any of you have found PROC SQL with GROUP BY to be faster for this.
I've read some articles comparing PROC MEANS and PROC SQL, but I'm not sure how they compare on big data with this many variables.
By the way, the dataset is a native SAS dataset, not coming from Oracle or any other database.
Any opinions will be appreciated.
Thank you,
Kaz
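For reference, a minimal sketch of the PROC MEANS approach described above; the dataset name HAVE and the variable names PRODUCT_CODE, DATE, and VAR1-VAR500 are placeholders for illustration:

```sas
/* Sum 500 analysis variables by the two index variables.          */
/* NWAY keeps only the fully crossed PRODUCT_CODE*DATE groups.     */
proc means data=have noprint nway;
    class product_code date;
    var var1-var500;
    output out=want(drop=_type_ _freq_) sum=;
run;
```

Using CLASS rather than BY avoids a separate PROC SORT pass over the full dataset.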
Accepted Solutions
Whatever procedure you use will have to read the data and group it for summing.
Both PROC SQL and PROC MEANS are thread-enabled.
I don't expect PROC SQL to outperform PROC MEANS for this task.
What you probably want to do is ensure that multi-threading is used where possible (i.e., options set to allow multi-threading).
I would expect this process to be I/O bound, so what will certainly impact performance is how your SAS WORK and UTILLOC areas (used for sorting) are set up (but that's something only an admin can change).
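As a sketch, options along these lines enable threaded processing; the right CPUCOUNT for a shared server is something to confirm with your admin:

```sas
/* THREADS allows threaded procedures; CPUCOUNT=ACTUAL lets SAS     */
/* use all detected CPUs. FULLSTIMER prints detailed timing so you  */
/* can see whether a change actually helped.                        */
options threads cpucount=actual fullstimer;
```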
Patrick,
Thanks very much for your quick reply.
I read through and found the right keywords for the options.
Also, thanks for saving me the time of rewriting the code in PROC SQL only to find it still takes around 30 minutes.
Kaz
30 minutes doesn't feel that bad for the volumes you're dealing with: 8 bytes per numeric variable * 500 variables * 21M rows adds up to more than 78 GB.
I'd expect the main bottleneck to be I/O, which you can't do much about except to reduce volumes as early as you can and to minimize passes through the data while the volumes are high.
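One way to reduce volume on the first pass is to apply dataset options so only the needed columns and rows enter the procedure; a sketch with assumed names (HAVE, PRODUCT_CODE, DATE, VAR1-VAR500):

```sas
/* KEEP= narrows processing to 502 of the 1,500 columns and        */
/* WHERE= drops unwanted rows before they reach the procedure,     */
/* so the summarization works on a much smaller logical table.     */
proc means data=have(keep=product_code date var1-var500
                     where=(not missing(product_code)))
           noprint nway;
    class product_code date;
    var var1-var500;
    output out=want(drop=_type_ _freq_) sum=;
run;
```

Note that a native SAS dataset is row-stored, so KEEP= mainly cuts processing width rather than raw disk reads; doing the whole job in one procedure step (no preliminary sort or subset step) is what minimizes full passes.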
Yes, it's not bad at all.
Recently everyone says Python, Python, Python every day, and of course I use it for certain purposes, but
for big-data manipulations like this, I think SAS still holds a significant advantage over other languages.
Together with the macro facility, it allows so much freedom.
Thanks anyway,
Kaz
Add some options to speed up your code:
option bufno=100 bufsize=128k cpucount=12 threads;
You could also try PROC TABULATE.
@Chris_NewZ claimed somewhere that PROC TABULATE is faster than MEANS or SQL.
ods select none;
proc tabulate data=sashelp.class out=want ;
var age weight height;
table (age weight height)*sum;
run;
ods select all;
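Applied to the original question, the grouped version of that TABULATE sketch would add a CLASS statement; PRODUCT_CODE, DATE, and the VAR1-VAR500 range are assumed names, so treat this as an untested outline:

```sas
ods select none;
/* Group by the two index variables and sum the 500 measures. */
proc tabulate data=have out=want;
    class product_code date;
    var var1-var500;
    table product_code*date, (var1-var500)*sum;
run;
ods select all;
```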
Thank you for the parameter settings.
I had totally forgotten about the TABULATE procedure, which has more flexible functions than MEANS.
Kaz