Can someone try this with arrays? I have done it with the PROC TRANSPOSE method.
data have;
  input id name $ gtype $ grade;
  datalines;
10 aa e 70
10 aa f 80
10 aa g 90
10 aa h 70
10 aa i 60
10 bb g 90
10 bb h 70
10 bb i 60
;
run;

proc transpose data=have out=want;
  by id name;
  var grade;
  id gtype;
run;

I would like to transpose the above data using arrays instead. Desired output:

id name  e   f   g   h   i
10  aa   70  80  90  70  60
10  bb   .   .   90  70  60
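In case it helps, here is one way the array approach could look for this particular example. It is a sketch, not a general solution: it assumes the data set is named HAVE, is already sorted by ID and NAME, contains only the five GTYPE values shown, and it relies on the WHICHC function (SAS 9.2 or later).

```sas
proc sort data=have;
  by id name;
run;

data want;
  /* DOW loop: read one whole id/name group per output row */
  do until (last.name);
    set have;
    by id name;
    array scores{5} e f g h i;            /* one variable per GTYPE value */
    /* WHICHC returns the position of gtype in the list (0 if absent),
       so each grade lands in the matching array slot */
    scores{whichc(gtype, 'e','f','g','h','i')} = grade;
  end;
  drop gtype grade;
run;
```

Because E through I are assigned (not read with SET), they reset to missing at the top of each implicit iteration, which is why the bb row correctly shows missing values for E and F. The GTYPE list is hard-coded, so you would extend it (or generate it with a macro) if the data had more categories.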
I hope this link can help both of us:
Paper 082-2013 Sharpening Your Skills in Reshaping data: PROC TRANSPOSE vs. Array Processing
Another good resource is this:
For what it's worth, I totally disagree with Art Li's conclusion. In my experience, DATA step array processing is almost ALWAYS faster than PROC TRANSPOSE.
That's why I published a macro a couple of years ago that accomplishes the task that way. Take a look at: http://www.sascommunity.org/wiki/A_Better_Way_to_Flip_(Transpose)_a_SAS_Dataset
If you download and run that macro, and then run:
data have;
  input name $ gtype $ grade;
  do id=1 to 500000;
    output;
  end;
  cards;
aa e 70
aa f 80
aa g 90
aa h 70
aa i 60
bb g 90
bb h 70
bb i 60
;
run;

proc sort data=have;
  by id name;
run;

proc transpose data=have out=want;
  by id name;
  var grade;
  id gtype;
run;

%transpose(data=have, out=want2, by=id name, var=grade, id=gtype);
you'll see why I question his conclusion. The disparity increases when one is expanding a wide file (2 or more variables) to be even wider.
Art, CEO, AnalystFinder.com
>For what it's worth, I totally disagree with Art Li's conclusion. In my experience, DATA step array processing is almost ALWAYS faster than PROC TRANSPOSE.
Writing a PROC TRANSPOSE is much faster than writing a DATA step array method. Plus, it's more dynamic and handles the addition or change of fields more easily. So if you'll get back those few minutes in processing time, the array is worth it, but usually I find that's not the case. The macro does make that a non-issue, though.
Not only does the macro approach make the extra coding a non-issue; when one is transposing a combination of character and numeric variables, or going from wide to wider, it still requires only a single line of code, while the coding for PROC TRANSPOSE becomes a significant issue.
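To illustrate the wide-to-wider point: suppose HAVE carried a second analysis variable alongside GRADE. WEIGHT below is purely hypothetical, and the macro call assumes its VAR= parameter accepts a variable list; treat this as a sketch of the contrast rather than verified code.

```sas
/* WEIGHT is a hypothetical second variable, used only for illustration.
   PROC TRANSPOSE needs one step per variable, plus a merge: */
proc transpose data=have out=t_grade prefix=grade_;
  by id name;
  var grade;
  id gtype;
run;

proc transpose data=have out=t_weight prefix=weight_;
  by id name;
  var weight;
  id gtype;
run;

data want;
  merge t_grade(drop=_name_) t_weight(drop=_name_);
  by id name;
run;

/* versus a single macro call (assuming VAR= takes a list): */
%transpose(data=have, out=want2, by=id name, var=grade weight, id=gtype);
```

The gap widens with each additional variable: every new variable adds another PROC TRANSPOSE step and another data set to merge, while the macro call only grows by one name in the VAR= list.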
@art297 Hi Art, thank you for the great insight. I have learned something I wasn't sure about despite having a copy of your %transpose macro. The macro approach generally seems rather "compiler intensive": while the macro processor does so much work, the other SAS elements sit in the input stack, waiting in the queue to go to the compiler. Would there be more overhead than PROC TRANSPOSE? Am I making sense, or being silly? Sorry if the latter is the case.
If this is from you and Xia Keshan, I'm sure the solution must be immaculate. I'm just curious.
Also, an idea: why not incorporate the macro as an autocall macro, as part of the list of other autocall macros like %cmpres etc.?
It takes less time to run (and, for complex transpositions, much, much less time to code), so I can't see how it could be more compiler intensive. It takes less CPU and processing time than running PROC TRANSPOSE.
It's not intended as a PROC TRANSPOSE replacement, as there are still things one can do with PROC TRANSPOSE that the macro wasn't intended to accomplish.
As for making it an autocall macro ... that is totally up to the user, since any macro can be compiled and stored that way. Take a look at: http://analytics.ncsu.edu/sesug/2008/SBC-126.pdf
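For reference, pointing the autocall facility at a folder of macro source files is a one-line OPTIONS statement. The directory path below is just a placeholder; substitute wherever you saved the macro's .sas file.

```sas
/* '/path/to/my/macros' is a placeholder for your own macro directory.
   Appending SASAUTOS keeps the SAS-supplied autocall macros available. */
options mautosource sasautos=('/path/to/my/macros' sasautos);
```

After this, %transpose(...) can be called in any program without %INCLUDE-ing the source first.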
Art, CEO, AnalystFinder.com