Not applicable
Posts: 0



I am doing logistic regression, and I used the GLIMMIX procedure so that I could incorporate random effects in my analysis. I also used AIC to choose the best model among the candidates. I found strange results with the AIC from GLIMMIX: when I compare the analyses in GENMOD and in GLIMMIX, I get different best models.

Has anyone noticed problems with the AIC computed by GLIMMIX?

Frequent Contributor
Posts: 136


Posted in reply to deleted_user

Hi Ari,

The problem is that GENMOD and GLIMMIX often use fundamentally different (but nonetheless useful and appropriate) underlying likelihoods to estimate parameters.

To get AICs that you can use to compare models in GLIMMIX, you need to stick to the Laplace or quadrature methods, which means no R-side (only G-side) random-effects specifications; that sounds like what you have in your data.
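To illustrate the distinction, here is a hypothetical sketch (dataset and variable names mydata, y, trt, and clinic are placeholders, not from the original post). Only the G-side form permits METHOD=LAPLACE or METHOD=QUAD, which produce an AIC based on a true (approximate) log likelihood:

```sas
/* Sketch with placeholder names: G-side vs R-side random effects */
proc glimmix data=mydata method=laplace;
   class trt clinic;
   model y(event='1') = trt / dist=binary link=logit;
   random intercept / subject=clinic;      /* G-side: allowed with Laplace */
   /* random _residual_ / subject=clinic;     R-side: would force
      pseudo-likelihood estimation, whose "AIC" is not comparable */
run;
```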

I find it helpful to compare procedure outputs in baby steps, such as:

1. Compare fixed-effects-only models (GLMs) between GENMOD and GLIMMIX with the default method. The log likelihood and associated AIC _might_ be the same - sorry, I'd have to check. The fixed-effect parameter estimates should agree to at least 4 decimal places. Ignore the p-values of the estimates at this stage. This step confirms that you are specifying equivalent GLMs.

2. Extend the GLIMMIX GLM to a GLMM with just one random effect (G-side, not R-side). Choosing a non- or least-'controversial' effect is one strategy; choosing a completely non-significant but harmless random effect with 1 DF is another. Use method=laplace or method=quad with a subject= specification as the base or reference model for further AIC comparisons. The log likelihood will change markedly to the one associated with the Laplace approximation of the original likelihood. The approximation is generally good if you have at least 2 observations per subject.

3. Extend the GLMM (add more random-effects specifications; experiment with appropriate effects and covariance structures such as vc, un, chol, unr, etc.) and compare AICs (or AICCs) against the base model from step 2.

4. Repeat step 3 until you are happy that a relatively good model has been found.
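The baby steps above might look something like the following sketch. All dataset and variable names (mydata, y, trt, clinic) are placeholders you would replace with your own:

```sas
/* Step 1: fixed-effects-only GLMs -- estimates should match closely */
proc genmod data=mydata descending;
   class trt;
   model y = trt / dist=binomial link=logit;
run;

proc glimmix data=mydata;
   class trt;
   model y(event='1') = trt / dist=binary link=logit;
run;

/* Step 2: base GLMM with one G-side random effect and a
   Laplace likelihood -- this AIC is the reference point */
proc glimmix data=mydata method=laplace;
   class trt clinic;
   model y(event='1') = trt / dist=binary link=logit;
   random intercept / subject=clinic;
run;

/* Step 3: richer G-side structure, same method -- compare AIC/AICC
   only against models fit with the same likelihood */
proc glimmix data=mydata method=laplace;
   class trt clinic;
   model y(event='1') = trt / dist=binary link=logit;
   random intercept trt / subject=clinic type=un;
run;
```

The key design point is that every model you rank by AIC uses the same estimation method (Laplace or quadrature); mixing pseudo-likelihood fits into the comparison is what produces the inconsistencies described above.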
