08-21-2016 01:58 AM
I would be glad if you could help me with the following performance issue.
My question is whether it is more favourable to run PROC MODEL on multiple small data sets separately, or to stack all the small data sets into one big data set (i.e., via a DATA step) and run PROC MODEL once with a BY statement.
The scenario is the following:
I run the first simulation loop to generate a data set, run five regressions, and store the parameter estimates.
I run the second simulation loop to generate a data set, run five regressions, and store the parameter estimates, and so on.
My idea is the following:
I would like to run all simulation loops in a row and get, e.g., 5000 data sets, each with an indicator variable (e.g. "simit") equal to the iteration number. Then I stack all the data sets via a DATA step. After that, I run PROC MODEL only 5 times with "by simit" instead of
5 * 5000 = 25,000
PROC MODEL calls.
My question is whether this procedure is more efficient.
On the one hand, I have to invoke the procedure less often; on the other hand, the stacked data set processed with the BY statement might be huge.
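To make the idea concrete, here is a minimal sketch of the stacking approach. The model equation, the variable names (y, x1, x2), the number of observations per iteration, and the OUTEST= data set name are all placeholders, not my actual model:

```sas
/* 1. Generate all simulated data sets, each tagged with the
   iteration indicator "simit" (toy data-generating process). */
%macro simulate(n_iter=5000);
  %do i = 1 %to &n_iter;
    data sim&i;
      simit = &i;
      do obs = 1 to 100;
        x1 = rannor(0);
        x2 = rannor(0);
        y  = 1 + 2*x1 - x2 + rannor(0);
        output;
      end;
    run;
  %end;
%mend;
%simulate(n_iter=5000)

/* 2. Stack them into one big data set via a numbered range list. */
data all_sims;
  set sim1-sim5000;
run;

/* 3. One PROC MODEL call per regression, with BY processing,
   instead of one call per simulated data set. */
proc model data=all_sims;
  by simit;
  y = b0 + b1*x1 + b2*x2;
  fit y / outest=est1;
quit;
```

The OUTEST= data set then contains one row of parameter estimates per BY group, which replaces the per-iteration storing of estimates.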
I would be glad if you would answer me, for I am sure it would help me learn how programs are generally written more efficiently.
08-21-2016 08:34 AM
Thank you for your reply.
After reading your post, I tried to rewrite the simulation.
From former 144 minutes, I have come down to 23 minutes.
The PROC MODEL with BY took
real time 20.00 seconds
cpu time 19.92 seconds
which means that it does not behave the way I had feared.
There are 5000 loops, which would have meant 5000 * ~0.2 seconds = 1000 seconds = 16 2/3 minutes to achieve the same result with 5000 separate PROC MODEL calls.