Dear all, I have started using Akaike's Information Criterion (AIC). One of the key papers on this approach says, "AIC ranks the models in the set of alternatives; if none have merit, the models are still ranked. Thus, one needs some measure of the 'worth' of either the global model or the model estimated to be best." ... "Thus, standard statistical methods are needed to gauge this matter; these include adjusted R2, goodness-of-fit tests, and the analysis of regression residuals." (Burnham KP, Anderson DR, Huyvaert KP 2011. AIC model selection and multimodel inference in behavioral ecology: some background, observations, and comparisons. Behavioral Ecology and Sociobiology 65: 23-35.)

I have used PROC MIXED (with random and fixed effects). However, as far as I know, PROC MIXED supports neither adjusted R2 nor goodness-of-fit tests. Any advice on what to use instead? Or on how to convince readers of the paper that the best candidate model has merit?
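One option sometimes suggested for mixed models, since PROC MIXED reports "-2 Log Likelihood" in its Fit Statistics table, is a likelihood-ratio-based pseudo-R2 (the Cox-Snell/Magee form): R2 = 1 - exp(-(2/n) * (logL_model - logL_null)), comparing the candidate model against an intercept-only null model fitted by ML. The sketch below shows the arithmetic only; the -2LL values and sample size are hypothetical placeholders, not from any real fit.

```python
import math

def lr_pseudo_r2(loglik_model, loglik_null, n):
    """Likelihood-ratio-based pseudo-R^2:
    R^2 = 1 - exp(-(2/n) * (logL_model - logL_null)).
    Compares a fitted model to an intercept-only null model."""
    return 1.0 - math.exp(-2.0 * (loglik_model - loglik_null) / n)

# PROC MIXED prints "-2 Log Likelihood"; halve and negate to get logL.
# The numbers below are hypothetical, for illustration only.
neg2ll_model = 412.6   # hypothetical -2LL of the best candidate model
neg2ll_null = 468.9    # hypothetical -2LL of the intercept-only model
n = 120                # hypothetical number of observations

r2 = lr_pseudo_r2(-neg2ll_model / 2.0, -neg2ll_null / 2.0, n)
print(round(r2, 3))
```

Note that both models must be fitted by maximum likelihood (METHOD=ML), not the REML default, for their log-likelihoods to be comparable, and this statistic supplements rather than replaces a residual analysis.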