I would be glad if you could help me with the following performance issue.
My question is whether it is more favourable to run PROC MODEL on multiple small data sets separately, or to stack all the small data sets into one big data set (via a DATA step) and run PROC MODEL once on it with a BY statement.
The scenario is as follows: I run the first simulation loop to generate a data set, run five regressions, and store the parameter estimates.
Then I run the second simulation loop to generate a data set, run five regressions, and store the parameter estimates, and so on.
My idea is this: I run all simulation loops in a row and get, e.g., 5000 data sets, each with an indicator variable (e.g. "simit") equal to the iteration number. Then I stack all the data sets via a DATA step. Afterwards, I only need to run PROC MODEL 5 times with "by simit" instead of
5 * 5000 = 25000
separate PROC MODEL calls.
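A minimal sketch of the stacked approach described above. The data set names (sim1 through sim5000), the variables y and x, and the model specification are all hypothetical placeholders; the point is the stacking DATA step and the BY statement in PROC MODEL:

```sas
/* Stack all simulated data sets into one big data set.
   Each simN is assumed to already carry simit = N. */
data all_sims;
   set sim1-sim5000;
run;

/* BY-group processing requires the data sorted by simit. */
proc sort data=all_sims;
   by simit;
run;

/* One PROC MODEL call handles all 5000 iterations via BY groups;
   repeat once per regression (5 calls total instead of 25000). */
proc model data=all_sims;
   by simit;
   y = a + b*x;            /* hypothetical model specification */
   fit y / outest=est1;    /* one row of estimates per BY group */
quit;
```

The OUTEST= data set then holds the parameter estimates for every value of simit, which replaces storing them iteration by iteration.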
My question is whether this procedure is more efficient.
On the one hand, I have to invoke the procedure far less often; on the other hand, the data set processed with the BY statement might be huge.
I would be glad if you would answer me, for I am sure it would help me learn how programs are generally written more efficiently.
After reading your post, I tried to rewrite the simulation.
From the former 144 minutes, I have come down to 23 minutes.
The PROC MODEL with BY took
real time  20.00 seconds
cpu time   19.92 seconds
which means that it does not work the way I had feared.
There are 5000 loops, which would have meant 5000 * ~0.2 seconds = 1000 seconds = 16 2/3 minutes to achieve the same result with 5000 separate PROC MODEL calls.