Ruth
Fluorite | Level 6

Hi,

I have a very large data set with over 5 million records and 30 variables. I plan to do a logistic regression analysis. Given the size of the data set, I am wondering whether I still need to split the data into an estimation sample and a validation sample.

Maybe the validation sample is not needed: because the sample size is so large, the sampling error should be very small.

4 REPLIES
Doc_Duke
Rhodochrosite | Level 12

Have you looked at

http://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_logistic_sec... ?

You can probably get this to run using the MULTIPASS option, though it may take days.  We routinely do logistic regressions on multi-million observation data sets.

Take a simple random sample, run it to make sure the model and results look like they should, and then run the whole thing.  Depending on the capacity of your computer, it could take days to weeks to run.
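In case it helps, here is a minimal sketch of that trial-run workflow.  The data set and variable names (full, response, x1-x30) are placeholders for your own, and the sample size is just an example:

   /* Draw a simple random sample for a trial run */
   proc surveyselect data=full out=trial method=srs
                     sampsize=100000 seed=20130501;
   run;

   /* Fit on the trial sample to check that the model behaves sensibly */
   proc logistic data=trial;
      model response(event='1') = x1-x30;
   run;

If the trial fit looks reasonable, point PROC LOGISTIC at the full data set for the final run.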

We are moving to a SAS Grid Computing environment to ease the bottlenecks, but have taken the above approach for a number of years.

Another approach would be to use the SEMMA approach of SAS Enterprise Miner.  You don't have to have the EM product to use the principles.  The thing that I would worry about here is whether you have categorical predictors that are both rare and highly influential; a quick check for that is sketched below.
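As a rough sketch of that check (region is just a stand-in for one of your categorical predictors), a frequency table will show any sparsely populated levels:

   /* Look for rare levels in a categorical predictor */
   proc freq data=full order=freq;
      tables region / nocum;
   run;

Levels with only a handful of observations can land in or out of the estimation sample by chance, which is exactly the rare-but-influential problem.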

Doc Muhlbaier

Duke

Dale
Pyrite | Level 9

Ruth,

Let me see if I can turn your thinking around here.  You have a very large data set.  Because it is large, you should have tight confidence limits on predicted probabilities obtained from a logistic regression model fitted to these data.  Now, suppose that you had just 4 million records instead of 5 million.  Would you still expect tight confidence limits on predicted probabilities?  Probably so.  (If not, how large would a sample have to be?)

Now, if 4 million records are going to produce tight confidence limits on predicted probabilities, what would it hurt to hold out a million records from the estimation set and use those 1 million observations for subsequent evaluation?  It would seem that you have every opportunity in the world to estimate and subsequently test a model.  What benefit would there be in using 5 million records for model estimation without model validation versus using 4 million records for model estimation followed by model evaluation?
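If you do hold out records, an 80/20 split is easy to set up.  This is only a sketch, with full as a placeholder name for your data set:

   /* Flag 80% of records for estimation; OUTALL keeps every record */
   proc surveyselect data=full out=flagged method=srs
                     samprate=0.8 seed=8675309 outall;
   run;

   /* Split on the Selected flag that OUTALL adds */
   data train validate;
      set flagged;
      if selected = 1 then output train;
      else output validate;
   run;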

Ruth
Fluorite | Level 6

Hi Dale, thanks for the reply.

I am trying to interpret your answer. Do you mean that validation is not necessary in my case, since the sample size is extremely large? Because the size is large, the confidence intervals on the predictions should be small, which makes validation unnecessary. Am I right?

Dale
Pyrite | Level 9

No!  Quite the opposite!  I am saying that with the volume of data that you have, there is every reason to hold out a validation sample of, say, 1 million records.  You would still have a very large sample (4 million records) for estimation.  Confidence intervals of predicted probabilities will not be much larger for a model constructed from 4 million observations compared to a model constructed from 5 million observations.  Thus, you don't lose much in the way of model estimation and you gain much by having a validation sample where you can test your model.
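As a sketch of what that test might look like (the data set and variable names are placeholders, and the FITSTAT option on the SCORE statement assumes a reasonably recent SAS release):

   /* Fit on the estimation sample, then score the holdout */
   proc logistic data=train;
      model response(event='1') = x1-x30;
      score data=validate out=validate_scored fitstat;
   run;

The scored data set holds predicted probabilities for the holdout records, so you can compare predictions against observed outcomes on data the model never saw.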
