03-10-2013 10:27 PM
Dear all,
I have a dataset in csv format. I am looking for a way/tool to randomly split it, with 70% of the data for training and 30% for testing, so that both subsets are random samples from the same distribution. I chose 70%/30% because it seems to be a common rule of thumb.
Any suggestions, methods, or guides? Or should I use EG (Enterprise Guide) or EM (Enterprise Miner)?
03-11-2013 10:01 AM
Those statements can be added to a DATA step to create a new character variable called SET that randomly takes the value TRAINING for 70% of observations and the value TESTING otherwise.
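Outside SAS, the same idea of randomly tagging each row TRAINING or TESTING can be sketched in Python (column names and the 1000-row sample data here are made up for illustration):

```python
import random

def assign_set(rows, train_frac=0.7, seed=12345):
    """Tag each row dict with set='TRAINING' (probability train_frac) or 'TESTING'."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    return [
        {**row, "set": "TRAINING" if rng.random() < train_frac else "TESTING"}
        for row in rows
    ]

rows = [{"id": i} for i in range(1000)]
tagged = assign_set(rows)
n_train = sum(r["set"] == "TRAINING" for r in tagged)
print(n_train)  # roughly 700 of the 1000 rows
```

As with the SAS approach, each row is assigned independently, so the split is approximately (not exactly) 70/30.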
04-09-2013 11:22 AM
Well, if you have EM, then splitting the data into Training and Testing is trivial. The feature is a default feature when creating your SAS data in EM. You can also use a Data Partition Node.
04-09-2013 12:02 PM
If you're really interested in splitting a csv file into two csv files, there is no need to create a SAS data set along the way. Here's one approach:
filename csvfile 'path to existing csv file';
filename train 'path to a training subset';
filename test 'path to a testing subset';
data _null_;
   infile csvfile;
   input;
   if ranuni(12345) < 0.7 then file train;
   else file test;
   put _infile_;
run;
The drawback is that you will get an approximate 70/30 split, not an exact one. If you really want to create a SAS data set from the csv file first, there are many alternatives, including PROC SURVEYSELECT.
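For an exact rather than approximate split, the usual trick is to shuffle the row indices and cut at the 70% mark; a minimal sketch in Python (the 1000-row sample data is made up for illustration):

```python
import random

def exact_split(rows, train_frac=0.7, seed=12345):
    """Shuffle a copy of the rows and cut at train_frac for an exact split."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

rows = list(range(1000))
train, test = exact_split(rows)
print(len(train), len(test))  # 700 300
```

Cutting a shuffled copy guarantees exactly 70% of rows in training while every row still has the same chance of landing in either subset.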