Meaningful GLIMMIX Fixed Intercept


12-10-2013 02:31 PM

I am profiling hospitals based on mortality. Here's the syntax:

proc glimmix data=data;
  model Mortality(event=last) = <long list of covariates>
        / dist=binary link=logit solution cl ddfm=bw;
  random intercept / subject=Hospital_ID solution cl;
run;

Each hospital gets flagged based on its random intercept. I am struggling to obtain meaningful estimates of hospital-level and overall adjusted event rates. In a linear or logistic model, the fixed intercept can be used to obtain the sample-wide event rate when covariates are centered. That does not seem to work in GLIMMIX: the estimate is very much lower than the unadjusted sample event rate.

Looking for suggestions.

Thanks, Haris


12-11-2013 11:15 AM

Not sure if this will address the problem, but the pseudo-likelihood estimates may not be what you need. Have you tried Laplace or quadrature methods, and looked at the empirical Bayes estimates of the intercepts?

Steve Denham


12-11-2013 11:20 AM

Hmm, I think I used quadrature and double-dogleg, but this definitely gives me something to look into. THANKS! I'll check it out and report back to the group.

Follow-up: I tested restricted pseudo-likelihood, quadrature, and Laplace, and all of them produce virtually identical results. Back to the drawing board.

Message was edited by: Haris Subacius


12-16-2013 10:59 AM

Here's the same question rephrased: **what is the interpretation of the fixed intercept in a mixed-effects DIST=BINARY LINK=LOGIT GLIMMIX model**?

Normally, it is the logit of the event rate at the average/zero level of the covariate vector. Does this still hold in GLIMMIX? It does not seem to. As soon as I start using covariates, the intercept drops significantly, even if I use all continuous predictors re-scaled to mean = 0. If the interpretation from the standard LOGISTIC model applies, why is the intercept so much lower?

**How does the presence of random effects affect the value of the fixed intercept?** Stumped here.


12-16-2013 02:44 PM

The intercept has the same interpretation in GLIMMIX as in LOGISTIC. But I see that you did not use a

class Hospital_ID;

statement. Your results for the random-intercept model will be incorrect if the data are not sorted by Hospital_ID prior to running GLIMMIX. If sorted, the results will be fine, but it is safer to use a CLASS statement.

To me it makes sense that the intercept could get smaller as you add covariates. The covariates likely have high correlations, and it is unlikely that a point where all covariates are simultaneously at their means exists in your dataset.

You will need to show actual output if you need more help.


12-16-2013 03:47 PM

Thanks for your reply LVM.

I did sort by the IDs prior to execution. The dataset is large, and SAS claims that sorting by the subject variable and leaving it out of the CLASS statement speeds up execution. Hence, no CLASS statement.

What you say about high inter-correlations makes sense, and so does the non-existence of the point where all covariates are simultaneously at their means. However, wouldn't you expect the mathematically artificial intercept to be somewhere close to the sample-wide event rate? If you center all of your covariates at the mean, that is.

I don't understand why you would expect the intercept to get smaller. Why not larger? I don't understand the reason for the deviation from the raw rate, much less its direction.

Let me know what output you would like to see. LOGISTIC produces an intercept of -2.9888 ± 0.2406. GLIMMIX with the identical set of covariates PLUS the random effect of Hospital gets me -4.1852 ± 0.2491. That corresponds to a predicted sample event rate of 4.8% versus 1.15%. Huge difference! WHY!?

In this particular example, the variables are not centered around their means, so I would not expect the rate to be near the sample-wide unadjusted rate of 10.8%. I did hope, however, that the overall rate would be similar.

Here's the syntax for LOGISTIC:

proc logistic data=analyze desc;
  model Mortality = <same long list of covariates>;
run;


12-16-2013 04:31 PM

Several things to think about.

First, you can directly compare LOGISTIC and GLIMMIX: just take the RANDOM statement out of GLIMMIX and run. You should get the same results from the two procedures (intercept and coefficients).

Second, in your reply to Steve you said you thought you used quadrature, but your code does not show this. With random effects and binary data, it is critical that you use quadrature (method=quad), because the default RSPL (linearization) method can be *very* biased in this circumstance. The bias diminishes rapidly for binomial data as the number of trials per experimental unit goes up (but your data are 0/1, a trial size of 1). There can still be bias with quadrature, but much less so. Often it is impossible to use quadrature because of memory/time constraints, as described in the user's guide, but try it as your first choice. If quadrature is not feasible, then use method=laplace. (These options won't affect your results when you remove the RANDOM statement, since they apply only to models with random effects.)

Third, your reply suggests that you have convergence problems (you mentioned the nonexistence of a convergence point??? Not sure what you mean). This can drastically affect your results. Are you getting convergence? What is the estimated variance (and its SE)?
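Putting the two suggestions together (CLASS statement plus quadrature), the original model might be re-run like this; this is a sketch, with <long list of covariates> standing in for the actual model effects and the other MODEL options carried over from the original call as needed:

```sas
proc glimmix data=data method=quad;
  class Hospital_ID;
  model Mortality(event=last) = <long list of covariates>
        / dist=binary link=logit solution cl;
  random intercept / subject=Hospital_ID solution cl;
run;
```

If quadrature runs out of memory or time on a dataset this size, method=laplace is the fallback.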

My thoughts about the decreasing intercept were more related to the situation where the covariates are not centered. Think of two covariates, x1 and x2. If only x1 is in the model, then the intercept is the expected response when x1=0, but x2 could be anything. Crudely, you are getting the expected response when x2 is at its center (it is more complex than this, depending on the correlations of the x's and other things). The intercept would not represent the situation with x1=x2=0. Now, if you add x2, the intercept is the expected response when x1=x2=0 (with x3, x4, ... being anything). As you add x's, you are pulling the intercept towards a global x1=x2=x3=...=0 case. I think this could get more complicated with a mixture of positive and negative correlations in the X matrix. But this is all likely off the point of your problem.


12-16-2013 05:18 PM

Thanks again. Yes, I did use QUAD; I just took it out to simplify the post. No convergence problems; I was talking about the means converging on the sample-wide level. But thanks nevertheless.

Here's what I am thinking. If the intercept is Y in an unadjusted model, it should still be Y when a normally distributed, mean-centered covariate x1 is added. It should also be Y when another normally distributed, mean-centered covariate x2 is added. That should be the case regardless of beta1/beta2 and the correlation between x1 and x2. Is that not correct?


12-16-2013 06:11 PM

This issue is not related to why you are finding differences between the GLIMMIX and LOGISTIC output; that is still unresolved. My explanation primarily involved non-centered x variables. But with generalized linear models (GLMs), or generalized linear mixed models (GLMMs), things are more complicated. The logit intercept (the link applied to the expected value) depends on what x variables are in the model, even mean-centered ones. You are not necessarily getting an estimate that directly corresponds to the arithmetic average rate.
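The nonlinearity of the inverse link is the crux of that last point: averaging the logistic curve over a normal random intercept does not give the logistic curve evaluated at the average, so for any nonzero variance,

$$
\mathrm{E}_{b}\!\left[\frac{1}{1+e^{-(\beta_0+b)}}\right] \;\neq\; \frac{1}{1+e^{-\beta_0}}, \qquad b \sim N(0,\sigma_b^2),\ \sigma_b^2>0 .
$$

That is why the back-transformed fixed intercept need not match the raw sample event rate even with perfectly centered covariates.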


12-17-2013 02:56 PM

To back up Larry's comments: If in fact you did use method=quad, I am not surprised that the intercept moved to a more extreme value, as the estimate is conditional on the random effects, rather than marginal as is found when there are no random effects. See Stroup's *Generalized Linear Mixed Models*, and in particular the material supporting the graphic on the cover for a more detailed (and more understandable) explanation.
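For a rough sanity check on the size of the gap, there is a well-known approximation relating the marginal (population-averaged) and conditional (subject-specific) logit coefficients under a normal random intercept:

$$
\beta^{M} \;\approx\; \frac{\beta^{C}}{\sqrt{1+c^{2}\sigma_b^{2}}}, \qquad c=\frac{16\sqrt{3}}{15\pi}\approx 0.588 .
$$

PROC LOGISTIC, which ignores the clustering, estimates something close to $\beta^{M}$, while GLIMMIX with method=quad estimates $\beta^{C}$. Plugging the posted values $-2.9888$ and $-4.1852$ into this formula would imply $\sigma_b^{2}\approx 2.8$, which can be checked against the covariance parameter estimate in the GLIMMIX output.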

Steve Denham


12-17-2013 03:37 PM

I don't have the Stroup book, but thanks for the nod in that direction. I use Littell et al.'s SAS volume for reference, but Stroup sounds like a good read.

Curious; to my mind, conditional on the random effects, the intercept should be just as likely to move up as down. If there are a lot of outlier hospitals with small sample sizes, the overall sample event rate will move up if the outliers have smaller-than-average event rates and down if the outlier event rates are high.

I don't recall that event rates in my sample are related to sample size. In fact, the raw event rate is 14.1% and the GLIMMIX fixed intercept translates into something like 13.9%--an imperceptible downward shift. It's not until I start adding the zero-centered covariates that the drift takes the intercept down as far as 1%!

That tells me that the intercept drift has little or nothing to do with the random effects per se; it's something to do with the combination of covariates and random effects. I wonder if I have unusual clustering of some covariates in some of my centers. Or is it the case that covariates combined with random effects ALWAYS drag the intercept down?


12-18-2013 02:37 PM

Based on empirical evidence only, and a pretty small number of studies, the marginal intercept will be closer to Pr(Y|X) = 0.5 than the conditional intercept, due to regression to the mean, at least in repeated-measures analyses. (At least that is my interpretation of some of Stroup's words.)

I think you have identified the most probable cause of the downward trend: that the values of the covariates are really different by center. You may want to include center-by-covariate interactions as additional G-side effects. Some, if not most, will probably have a zero estimate, but it could be that some are pretty substantial and are thus affecting the intercept estimate. But that is just a guess until the analysis digs into the data.

Umm, what happens if the covariates aren't zero centered? What happens if they are zero centered BY CENTER?
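One way to try the by-center centering (a sketch; x1 and x2 are hypothetical names standing in for the actual continuous covariates) is PROC STDIZE with METHOD=MEAN, which subtracts the mean without rescaling, applied within BY groups:

```sas
proc sort data=analyze;
  by Hospital_ID;
run;

proc stdize data=analyze out=analyze_centered method=mean;
  by Hospital_ID;                /* center within each hospital      */
  var x1 x2;                     /* continuous covariates to center  */
run;
```

Dropping the BY statement gives the sample-wide centering for comparison.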

Steve Denham