
- How do I compare mixed models with proc glimmix


06-29-2017 05:13 PM - edited 06-29-2017 05:15 PM

Dear All,

I am using SAS 9.4.

I have fit three 2-level models to my data set using PROC GLIMMIX, as follows:

The outcome (Diabetes) and the covariate "bp" are binary variables.

The fitted models are:

1- Full model with random intercept (Model A)

2- Full model with both random intercept and random slope (Model A1)

3- Model A2: excluding the interaction term from the fixed level-1 covariates

I just wonder how I can compare these three models using SAS and tell which model fits best.

I appreciate your help very much.

ods output fitstatistics=fitA;
title "Model A, random intercept + fixed interactions";
proc glimmix data=data1 noclprint;
class group bp;
model DIABETES = bmi bp bmi*bp / solution link=logit dist=binary ddfm=satterthwaite;
random intercept / type=un subject=group;
weight WEIGHT;
format bp bp.;
format DIABETES DIABETES.;
run;

ods output fitstatistics=fitA1;
title "Model A1, random intercept + random slope + fixed interactions";
proc glimmix data=data1 noclprint;
class group bp;
model DIABETES = bmi bp bmi*bp / solution link=logit dist=binary ddfm=satterthwaite;
random intercept bmi / type=un subject=group;
weight WEIGHT;
covtest "random slope" . 0 0;
format bp bp.;
format DIABETES DIABETES.;
run;

ods output fitstatistics=fitA2;
title "Model A2, random intercept, no interaction";
proc glimmix data=data1 noclprint;
class group bp;
model DIABETES = bmi bp / solution link=logit dist=binary ddfm=satterthwaite;
random intercept / type=un subject=group;
weight WEIGHT;
format bp bp.;
format DIABETES DIABETES.;
run;

Accepted Solutions

Solution

07-06-2017 07:44 PM


Posted in reply to suzan

07-05-2017 10:07 PM - last edited on 07-07-2017 10:34 AM by ChrisHemedinger

You can compare AIC or AICC values for models that differ only in the random effects, when using the default REML estimation. You cannot do this for models that differ in the fixed effects. If you want to use likelihood-based comparison methods, use method=mspl in the GLIMMIX statement, which will get you ML estimation. However, ML estimation can give biased estimates of variances, which affects test statistics, etc. The bias could be small with a very large data set, but large with a small data set.
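As a sketch of that suggestion (reusing the dataset and variable names from the original post), the only change needed is the METHOD= option on the PROC GLIMMIX statement:

```sas
/* Sketch: request METHOD=MSPL so GLIMMIX uses maximum likelihood   */
/* estimation; the resulting -2 log likelihood, AIC, and AICC are   */
/* then likelihood-based statistics rather than pseudo-likelihoods. */
ods output fitstatistics=fitA;
title "Model A, random intercept + fixed interactions (ML)";
proc glimmix data=data1 noclprint method=mspl;
   class group bp;
   model DIABETES = bmi bp bmi*bp / solution link=logit dist=binary
                                    ddfm=satterthwaite;
   random intercept / type=un subject=group;
   weight WEIGHT;
run;
```

The same METHOD=MSPL option would be added to each of the three model runs so that their fit statistics are produced on the same footing.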

*Editor's note: see also the clarifications by @StatsMan in a later reply.*

All Replies



07-06-2017 07:44 PM

Thank you very much. It was really helpful.

I have tried your solution. I just wonder, after adding this option to each and every model, what should I expect to see differently in the output? (I noticed that for each model, the fit statistic [-2 log likelihood] changed after adding the mspl option.)

My understanding is that I have to compare this part of the output for all models. Sorry for such a question! I am a beginner!


Posted in reply to suzan

07-07-2017 09:19 AM

IVM is correct if the modeling is done in MIXED or if you are using GLIMMIX with normal errors. If you have binary data, though, the default estimation uses pseudo-likelihood methods so direct comparisons of the likelihoods and likelihood based statistics between competing models is not advisable.

If you switch to METHOD=LAPLACE or METHOD=QUAD, then a direct comparison can be made.

See the Fit Statistics section of the PROC GLIMMIX documentation for details.
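For the binary models in this thread, the switch StatsMan describes is again just the METHOD= option. A minimal sketch for Model A1, using the poster's names (both random effects already share SUBJECT=group, which by-subject integration methods require):

```sas
/* Sketch: METHOD=LAPLACE approximates the true marginal log        */
/* likelihood for the binary model, so the resulting fit statistics */
/* can be compared directly across competing models.                */
ods output fitstatistics=fitA1_laplace;
title "Model A1 under METHOD=LAPLACE";
proc glimmix data=data1 noclprint method=laplace;
   class group bp;
   model DIABETES = bmi bp bmi*bp / solution link=logit dist=binary;
   random intercept bmi / type=un subject=group;
run;
```

METHOD=QUAD would be requested the same way; see the GLIMMIX documentation for which statements and options each method supports.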


Posted in reply to StatsMan

07-07-2017 10:53 AM

Yes, it is important to use method=laplace or method=quad for binary or binomial data. (I missed that you were dealing with binary data.) Both laplace and quad **are** maximum likelihood methods, so the principles I mentioned still hold. Use method=mspl for this purpose with normal distributions if you want to compare likelihoods (or the AIC and AICC statistics).

A general rule is the smaller the AIC, the better the fit. You want a change in AIC of at least 2 to consider this a better fit.
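Since the original code already captures each model's fit statistics with ODS OUTPUT FITSTATISTICS=, one way to put the AIC values side by side is a small DATA step over those tables (a sketch using the poster's dataset names fitA, fitA1, fitA2, and the Descr/Value columns of the FitStatistics table):

```sas
/* Sketch: stack the FitStatistics tables captured by ODS OUTPUT    */
/* and keep only the AIC and AICC rows, so the three models can be  */
/* compared at a glance (smaller AIC = better; look for a gap >= 2).*/
data aic_compare;
   length model $ 8;
   set fitA  (in=a)
       fitA1 (in=a1)
       fitA2 (in=a2);
   if a  then model = "A";
   if a1 then model = "A1";
   if a2 then model = "A2";
   if Descr =: "AIC";   /* matches "AIC ..." and "AICC ..." rows */
run;

proc print data=aic_compare noobs;
   var model Descr Value;
run;
```

This only makes sense once all three models have been refit with the same likelihood-based METHOD=, as discussed above.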


07-07-2017 03:08 PM

Many thanks for your guidance. It helped big time