Hello,
I am having a bit of trouble replicating every row in my dataset n times.
The idea is to merge 4003 subjects from 44 groups, each replicated 1000 times, with 1000 rows of logistic model parameter estimates from 1000 bootstrapped samples of my original data, for each of the 4003 subjects. Each subject should occupy 1000 consecutive rows (1,1,1,... (1000 times), 2,2,2,...), and the whole block of 1000 parameter estimate rows needs to be repeated 4003 times (1,2,3,...,1000, 1,2,3,...,1000, ...).
I'm then going to evaluate each subject with the 1000 logistic models and find the mean and SE for the odds ratio CI, by group.
Any help is greatly appreciated.
Thank you,
Patrick
Accepted Solutions
Is this what you are asking?
data class(drop=i);
  do i = 1 to 1000;                      /* repeat the entire dataset 1000 times */
    do j = 1 to n;                       /* walk every observation by row number */
      set sashelp.class nobs=n point=j;  /* direct (POINT=) access; j is not written out */
      output;
    end;
  end;
  stop;  /* POINT= never reaches end-of-file, so STOP is required */
run;
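The same POINT= technique can also build the paired structure in a single pass, so no separate merge step is needed. This is only a sketch — the dataset names subjects and beta3 are hypothetical stand-ins for your two datasets, not names from this thread:

```sas
/* Sketch: pair every subject with every bootstrap estimate row in one pass. */
/* SUBJECTS (4003 rows) and BETA3 (1000 rows) are hypothetical names.        */
data paired;
  do s = 1 to nsubj;
    set subjects nobs=nsubj point=s;  /* hold one subject's row */
    do b = 1 to nbeta;
      set beta3 nobs=nbeta point=b;   /* cycle through all 1000 estimate rows */
      output;                         /* subject occupies 1000 consecutive rows */
    end;
  end;
  stop;  /* required: POINT= never reaches end-of-file */
run;
```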
Are you sure you don't want to do a many-to-one merge instead? If you really want to make copies of observations, just use a DO loop.
data class(drop=i);
  set sashelp.class;
  do i = 1 to 1000;  /* write 1000 consecutive copies of each observation */
    output;
  end;
run;
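If the many-to-many pairing is what's really wanted, a PROC SQL cross join produces it directly. A sketch only — the dataset and variable names (subjects, beta3, subject_id, rep) are illustrative assumptions, not from this thread:

```sas
/* Sketch: every subject paired with every bootstrap estimate row.        */
/* Dataset and variable names here are hypothetical, not from the thread. */
proc sql;
  create table paired as
    select s.*, b.*
    from subjects as s cross join beta3 as b
    order by s.subject_id, b.rep;  /* keeps each subject's 1000 rows consecutive */
quit;
```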
This works for creating 1000 consecutive copies of the same row, but not for repeating the whole set of 1000 rows 1000 times.
YES, this worked for me:
data Project2.Beta3_repeated_4003_times(drop=i);
  do i = 1 to 4003;
    do j = 1 to n;
      set Project2.Beta3 nobs=n point=j;
      output;
    end;
  end;
  stop;
run;
Thank you!
@ibenp please mark the question answered by selecting the correct solution.
Reading a file 4003 times using the POINT= approach (i.e., direct access to records) is probably a lot slower than letting SAS use its intrinsically faster sequential access. If your dataset is large, it's worth finding an approach that makes effective use of SAS's sequential data access engine. Beyond that, PROC APPEND works more like a "bulk load" of the data.
Of course you really don't want to run 4003 iterations of PROC APPEND, and you don't have to. This program concatenates the data set to itself, so it doubles in size at every iteration. If you were to ask for N=4096 (= 2**12), you'd need 12 iterations. For N=4003, you need 11 iterations plus a 12th "partial" iteration, appending a fraction of the data set to itself.
%let nreps=4003;

data _null_;
  maxpower=floor(log2(&nreps));  /* number of full doublings (11 for 4003) */
  put maxpower=;
  call execute('data class; set sashelp.class; run;');
  do power=1 to maxpower;
    call execute('proc append base=class data=class; run;');  /* dataset doubles each time */
  end;
  if 0 then set sashelp.class nobs=nrecs;   /* get the row count without reading any data */
  obs_to_add=nrecs*(&nreps - 2**maxpower);  /* observations still needed after the doublings */
  if obs_to_add>0 then
    call execute(cats('proc append base=class data=class (obs=',obs_to_add,'); run;'));
run;
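To make the arithmetic concrete (an illustrative check, not part of the original post): for &nreps=4003, floor(log2(4003)) is 11, so 11 doublings yield 2**11 = 2048 copies, and the partial append supplies the remaining 1955:

```sas
/* Check the doubling arithmetic for nreps=4003. */
data _null_;
  nreps = 4003;
  maxpower = floor(log2(nreps));  /* 11 full doublings        */
  copies   = 2**maxpower;         /* 2048 copies after those  */
  leftover = nreps - copies;      /* 1955 copies still needed */
  put maxpower= copies= leftover=;
run;
```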
The hash OUTPUT method will overwrite a SAS data set, but it will not append, and that can be costly. Consider voting for "Add a HASH object method which would append a hash object to an existing SAS data set".
Would enabling PROC SORT to simultaneously output multiple datasets be useful? Then vote for "Allow PROC SORT to output multiple datasets".