01-02-2016 08:34 PM
I have two questions. I keep running into "insufficient resources" errors when using GLMSELECT to investigate a file with 1M records, 5 categorical covariates with split options, and 1 BY variable. If I replace the BY statement with a macro loop, will this reduce the computational requirements and increase the probability that the code will run? Or are there simply too many covariates with split options?
I can't increase SUMSIZE, which is set to 1GB. Any other suggestions?
01-02-2016 09:45 PM
Switching to the macro language will never reduce the resources needed; the same computations still have to be performed either way.
If it is mathematically sensible, you might lower the resource requirement by taking one of your categorical covariates out of the model and turning it into a second BY variable.
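A minimal sketch of that idea (all dataset and variable names here are hypothetical placeholders, not from the original post): demote one categorical covariate, say c5, to a second BY variable so each BY group carries one fewer classification effect.

```sas
/* Hypothetical names: mydata, y, byvar, c1-c5.
   c5 is moved from the CLASS statement to the BY statement. */
proc sort data=mydata;
   by byvar c5;
run;

proc glmselect data=mydata;
   by byvar c5;                      /* smaller design matrix per group */
   class c1-c4 / split;              /* remaining categorical covariates */
   model y = c1-c4 / selection=lasso;
run;
```

Note this changes the model: effects are no longer estimated across levels of c5, so it only makes sense when separate models per c5 level are acceptable.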
01-03-2016 01:19 AM
If at all feasible, you could reduce the number of categories in some of your variables by merging similar categories.
If all else fails, you could run the model selection procedure on a subsample of your data.
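One way to draw such a subsample is with PROC SURVEYSELECT; this sketch (dataset names are hypothetical) takes a simple random sample of 100K records:

```sas
/* Hypothetical names: mydata, mysample.
   METHOD=SRS draws a simple random sample without replacement. */
proc surveyselect data=mydata out=mysample
                  method=srs
                  sampsize=100000
                  seed=12345;     /* fix the seed for reproducibility */
run;
```

You would then point GLMSELECT at the sampled dataset for the selection step.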
01-04-2016 11:16 AM
Have you considered splitting your data into, say, 10 subsets of approximately 100K records each, running the MODELAVERAGE statement on each subset, comparing the results across subsets (a sort of doubly averaged model) to get the selected variables, and then fitting the full dataset with only those variables? An adaptive LASSO method for variable selection would enable you to do this. See Example 49.5, "Model Averaging," in the SAS/STAT 14.1 documentation as a starting point.
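A rough sketch of that workflow (dataset and variable names are hypothetical, and option values like the number of folds and samples are illustrative, not recommendations):

```sas
/* Assign each record to one of 10 random subsets. */
data subsets;
   set mydata;
   fold = ceil(10 * ranuni(12345));   /* random fold id 1-10 */
run;

proc sort data=subsets;
   by fold;
run;

/* Run adaptive-LASSO selection with model averaging within each subset;
   compare which effects are selected consistently across folds. */
proc glmselect data=subsets;
   by fold;
   class c1-c5 / split;
   model y = c1-c5 / selection=lasso(adaptive);
   modelAverage nsamples=50;
run;
```

Effects that appear in most folds' averaged models are the candidates for a final fit on the full 1M-record dataset.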