Base SAS Programming: How to randomly select X no of obs from a...

12-26-2012 11:57 AM

Hi,

I have a dataset of X observations and I want to select Y records randomly. I know that FIRSTOBS/OBS or RANUNI can be used, but I'm wondering if there is a better way, because FIRSTOBS/OBS doesn't give a true random sample, and RANUNI requires manually choosing a cutoff such as random < 0.2. In the example below I want to select 5,125 records randomly from dataset sample.

```sas
%let obs = 5125;

data abc;
    set sample(firstobs=1 obs=&obs.);
run;
```

Thanks in advance for your help!

Accepted Solutions

Solution


12-26-2012 03:21 PM

I like John's approach but, from my limited tests, his code appears either to introduce a bias of some kind or at least not to select the first N records based on the random seed used.

I don't recommend the third approach below, but I included it for comparison with the other two. The second and third methods consistently produce the same results regardless of the seed or sample size specified; the first method, though, always deviates slightly:

```sas
%let ssize=10;
%let seed=5;

/* build a test dataset (19 obs x 10 = 190 records) with a random number per record */
data bigdata;
    set sashelp.class;
    do _n_ = 1 to 10;
        recnum + 1;
        randnum = ranuni(&seed.);
        output;
    end;
run;

/* method 1: sequential K/N selection, reading only chosen records via POINT= */
data sample (drop = k);
    k = &ssize.;                      /* specify sample size required */
    if 0 then set bigdata nobs = n;   /* get nobs without reading anything */
    do i = 1 to n while (k > 0);
        if ranuni(&seed.) < k / n then do;
            k = k - 1;
            set bigdata point = i;
            output;
        end;
        n = n - 1;
    end;
    stop;
run;

/* method 2: order by a random key, keep the first &ssize rows */
proc sql outobs=&ssize.;
    create table sample2 as
    select *
    from bigdata
    order by ranuni(&seed.);
quit;

/* method 3: sort by the pre-assigned random number, keep the first &ssize rows */
proc sort data=bigdata out=sample3;
    by randnum;
run;

data sample3;
    set sample3;
    if _n_ le &ssize.;
run;
```

All Replies

Posted in reply to vicky07

12-26-2012 12:13 PM

If you have SAS/STAT, you may find PROC SURVEYSELECT handy. For your case, sampling without replacement:

```sas
proc surveyselect data=sample method=srs n=5125 out=abc;
run;
```

Haikuo

Posted in reply to Haikuo

12-26-2012 12:20 PM

As was suggested, if you have PROC SURVEYSELECT available, that's what it was built for. But if not, there are a few ways to go about it (all of which require entering no more than the number 5125). Here are a few questions that would become important:

1. Is your data set large so that speed becomes important?

2. Do you need exactly 5,125 observations, or would approximately 5,125 be acceptable?

3. Can the same observation be selected more than once?

4. What should happen if the data set contains fewer than 5,125 observations?

Even if you need exactly 5,125 randomly selected unique observations in a single pass through the data, it can be done. The programming involves one short but somewhat complex DATA step.
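
The one-pass DATA step is not shown in this reply, but the idea it alludes to can be illustrated language-neutrally. Below is a sketch in Python of reservoir sampling, one standard way to draw an exact-size uniform sample in a single pass, even when the total record count is not known in advance; the function name, and the use of Python rather than SAS, are illustrative only.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Draw k items uniformly at random from an iterable in a single pass."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # fill the reservoir with the first k items
            reservoir.append(item)
        else:
            # item i+1 replaces a random reservoir slot with probability k/(i+1)
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Each record ends up in the sample with probability k/N overall, and only one sequential pass over the data is needed.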

Good luck.

Posted in reply to vicky07

12-26-2012 12:31 PM

If you don't have SAS/STAT, you can also do it quite easily with PROC SQL, e.g.:

```sas
proc sql outobs=5125;
    create table abc as
    select A.*
    from sample as A
    order by ranuni(0);
quit;
```
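
The ORDER BY RANUNI(...) trick generalizes beyond SQL: attach an independent random key to every row, sort by the keys, and keep the first k rows. A sketch of the same idea in Python (the function name is illustrative, not part of any library):

```python
import random

def sample_by_random_key(rows, k, seed=None):
    """Attach a random key to each row, sort by the keys, keep the first k.
    Mirrors ORDER BY RANUNI(seed) with OUTOBS=k in PROC SQL."""
    rng = random.Random(seed)
    keyed = [(rng.random(), row) for row in rows]
    keyed.sort(key=lambda pair: pair[0])   # random keys => random permutation
    return [row for _, row in keyed[:k]]
```

Note that this reads and sorts the entire dataset, which is why it scales worse than a POINT=-style method when N is very large.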

Posted in reply to vicky07

12-26-2012 01:13 PM

This is a common question. The most efficient method I have seen uses a DATA step with the POINT= option on a SET statement, so that only the selected observations are read from the source dataset. For an example of this code, see this 1998 posting from John Whittington:

http://listserv.uga.edu/cgi-bin/wa?A2=ind9810A&L=sas-l&D=0&P=2569

Posted in reply to art297

12-26-2012 05:23 PM

You will get a different list of observations from the K/N method than from the assign-a-random-number-and-sort method, but it has been proven mathematically that each observation has the same probability of selection under both.

The big advantage comes when N is large relative to K, because the POINT= operation reads only the needed observations; the other methods must read every observation from disk at least once. I have worked with datasets of millions of observations that take hours to process sequentially, yet I can take a sample of a couple of thousand observations in seconds.
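
The K/N method described here amounts to Knuth's "Algorithm S" (selection sampling): walk through the record numbers, select each with probability (records still needed)/(records still remaining), and stop once k have been taken. A Python sketch, assuming the total record count is known up front (as NOBS= provides in the DATA step); the function name is illustrative:

```python
import random

def selection_sample(n_total, k, seed=None):
    """Knuth's Algorithm S: choose k of the record numbers 1..n_total.
    Each record is taken with probability need/remaining at the moment it
    is considered, which works out to k/n_total overall for every record."""
    rng = random.Random(seed)
    chosen = []
    need, remaining = k, n_total
    for recnum in range(1, n_total + 1):
        if need == 0:
            break                        # sample complete; stop early
        if rng.random() < need / remaining:
            chosen.append(recnum)        # this record would be read via POINT=
            need -= 1
        remaining -= 1
    return chosen
```

Only the chosen record numbers ever need to be read from disk, which matches the speed advantage described above; the loop over record numbers itself is cheap.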

Posted in reply to art297

12-26-2012 06:51 PM

See a proof reposted in 2001 by Dr. John W.: http://listserv.uga.edu/cgi-bin/wa?A2=ind0105B&L=sas-l&P=R20114&D=0

Posted in reply to vicky07

12-26-2012 11:22 PM

Thank you all for your replies. I have a dataset of 3 million observations and tested all three methods listed by Arthur; the K/N method runs much faster than the other two. Thanks again!