Haris
Lapis Lazuli | Level 10

I am profiling hospitals based on mortality.  Here's the syntax:

     proc glimmix data=data;
       model Mortality(event=last) = <long list of covariates>
         / dist=binary link=logit solution cl ddfm=bw;
       random intercept / subject=Hospital_ID solution cl;
     run;

Each hospital gets flagged based on its random intercept.  I am struggling to obtain meaningful estimates of hospital-level and overall adjusted event rates.  In a linear or logistic model the fixed intercept can be used to obtain the sample-wide event rate when the covariates are centered.  That does not seem to work in GLIMMIX: the estimate comes out much lower than the unadjusted sample event rate.
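In case it helps, the back-transform I have in mind is rate = exp(b0)/(1 + exp(b0)). A minimal sketch of asking GLIMMIX for that rate directly (same placeholder covariate list as above) might be:

     proc glimmix data=data;
       model Mortality(event=last) = <long list of covariates>
         / dist=binary link=logit solution cl ddfm=bw;
       random intercept / subject=Hospital_ID solution cl;
       /* back-transform the fixed intercept (all covariates at 0) to the probability scale */
       estimate 'adjusted rate at covariates = 0' intercept 1 / ilink cl;
     run;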

Looking for suggestions.

Thanks, Haris

11 REPLIES
SteveDenham
Jade | Level 19

Not sure if this will address the problem, but the pseudo-likelihood estimates may not be what you need.  Have you tried Laplace or quadrature methods, and looked at the empirical Bayes estimates of the intercepts?
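Something along these lines, as a sketch only (it reuses the variable names from the original post and assumes the data are still sorted by Hospital_ID):

     proc glimmix data=data method=quad;
       model Mortality(event=last) = <long list of covariates>
         / dist=binary link=logit solution cl;
       random intercept / subject=Hospital_ID solution cl;
       /* empirical Bayes (BLUP) intercepts land in the Solution for Random Effects table;
          the output dataset name here is arbitrary */
       ods output SolutionR=hospital_eb;
     run;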

Steve Denham

Haris
Lapis Lazuli | Level 10

Hmm, I think I used quadrature and the double dogleg technique (DBLDOG), but this definitely gives me something to look into.  THANKS!  I'll check it out and report back to the group.

Follow-up: I tested restricted PL, quadrature, and Laplace, and all of them produce virtually identical results.  Back to the drawing board.


Haris
Lapis Lazuli | Level 10

Here's the same question rephrased: what is the interpretation of the fixed intercept in a mixed-effects GLIMMIX model with DIST=BINARY and LINK=LOGIT?

Normally, it is the logit of the event rate at the average/zero level of the covariate vector.  Does this still hold in GLIMMIX?  It does not seem to.  As soon as I start adding covariates, the intercept drops significantly, even when all continuous predictors are rescaled to mean = 0.  If the interpretation from the standard LOGISTIC model applies, why is the intercept so much lower?

How does the presence of random effects affect the value of the fixed intercept?  Stumped here.

lvm
Rhodochrosite | Level 12

The intercept has the same interpretation in GLIMMIX as in LOGISTIC. But I see that you did not use a

     class Hospital_ID;

statement. Your results for the random-intercept model will be incorrect if Hospital_ID is not sorted prior to running GLIMMIX. If sorted, the results would be fine, but it is safer to use a class statement.
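In other words, something like this (a sketch of the model from the original post with only the CLASS statement added):

     proc glimmix data=data;
       class Hospital_ID;   /* declares the subject explicitly */
       model Mortality(event=last) = <long list of covariates>
         / dist=binary link=logit solution cl ddfm=bw;
       random intercept / subject=Hospital_ID solution cl;
     run;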

To me it makes sense that the intercept could be getting smaller as you add covariates. You should realize that the covariates likely have high correlations, and it is unlikely that the point where all covariates are simultaneously at their means exists in your dataset.

You will need to show actual output if you need more help.

Haris
Lapis Lazuli | Level 10

Thanks for your reply LVM.

I did sort the IDs prior to execution.  The dataset is large, and SAS claims that sorting the IDs and taking the variable out of the CLASS statement speeds up execution.  Hence, no CLASS statement.

What you say about high inter-correlations makes sense, and the non-existence of the convergence point does as well.  However, wouldn't you expect the mathematically artificial intercept to be somewhere close to the sample-wide event rate?  If you center all of your covariates at the mean, that is.

I don't understand why you would expect the intercept to get smaller.  Why not larger?  I don't understand the reason for the deviation from the raw rate, much less its direction.

Let me know what output you would like to see.  LOGISTIC regression produces an intercept of -2.9888 ± 0.2406.  GLIMMIX regression with the identical set of covariates PLUS the random effect of Hospital gives me -4.1852 ± 0.2491.  Those correspond to predicted sample event rates of 4.8% versus roughly 1.5%.  Huge difference!  WHY!?

In this particular example, the variables are not centered around their means, so I would not expect the rate to be near the sample-wide unadjusted rate of 10.8%.  I did hope, however, that the overall rates would be similar.
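For what it's worth, the back-transform I used is just the inverse logit. A quick check in a data step, plugging in the two intercepts quoted above:

     data _null_;
       p_logistic = logistic(-2.9888);   /* about 0.048, i.e. 4.8% */
       p_glimmix  = logistic(-4.1852);   /* about 0.015, i.e. 1.5% */
       put p_logistic= p_glimmix=;
     run;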

Here's the syntax for LOGISTIC:

     proc logistic data=analyze desc;
       model Mortality = <same long list of covariates>;
     run;


lvm
Rhodochrosite | Level 12

Several things to think about. First of all, you can directly compare LOGISTIC and GLIMMIX: just take the RANDOM statement out of GLIMMIX and run it. You should get the same results from the two procedures (intercept and coefficients).

Second, in your reply to Steve you said you thought you used quadrature, but your code does not show this. With random effects and binary data, it is critical that you use quadrature (method=quad), because the default RSPL (linearization) method can be very biased in this circumstance. The bias diminishes rapidly for binomial data as the number of trials per experimental unit goes up, but your data are 0/1, a trial size of 1. There can still be bias with quadrature, but much less. Often it is impossible to use quadrature because of memory/time constraints, as described in the user's guide, but try it as your first choice. If quadrature is not possible, use method=laplace. (These options won't affect your results when you remove the RANDOM statement, since they apply only to models with random effects.)

Third, your reply suggests that you have convergence problems (you mentioned nonexistence of a convergence point??? Not sure what you mean). This can drastically affect your results. Are you getting convergence? What is the estimated variance (and its SE)?
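Concretely, the two comparison runs would look something like this (sketches only, with the same placeholder covariate list and the dataset name from your LOGISTIC step):

     /* 1. GLIMMIX without the RANDOM statement: should reproduce the LOGISTIC fit */
     proc glimmix data=analyze;
       model Mortality(event=last) = <same long list of covariates>
         / dist=binary link=logit solution cl;
     run;

     /* 2. the random-intercept model fit by adaptive quadrature
           (or method=laplace if quadrature runs out of memory) */
     proc glimmix data=analyze method=quad;
       model Mortality(event=last) = <same long list of covariates>
         / dist=binary link=logit solution cl;
       random intercept / subject=Hospital_ID solution cl;
     run;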

My thoughts about the decreasing intercept were more related to the situation where the covariates are not centered. Think of two covariates, x1 and x2. If only x1 is in the model, then the intercept is the expected response when x1=0, but x2 could be anything. Crudely, you are getting the expected response when x2 is at its center (it is more complex than this, based on correlations of the x's and other things). The intercept would not represent the situation with x1=x2=0. Now, if you add x2, the intercept is the expected response when x1=x2=0 (with x3, x4, ... being anything). As you add x's, you are pulling the intercept towards a global x1=x2=x3=...=0 case. I think this could get more complicated with a mixture of positive and negative correlations in the X matrix. But this is all likely off the point of your problem.

Haris
Lapis Lazuli | Level 10

Thanks again.  Yes, I did use QUAD; I just took it out of the posted code to simplify it.  No convergence problems; I was talking about the means converging on the sample-wide level.  But thanks nevertheless.

Here's what I am thinking.  If the intercept is Y in an unadjusted model, it should still be Y when a normally distributed, mean-centered covariate x1 is added.  It should also be Y when another normally distributed, mean-centered covariate x2 is added.  That should be the case regardless of beta1/beta2 and the correlation between x1 and x2.  Is that not correct?

lvm
Rhodochrosite | Level 12

This issue is not related to why you are finding differences between GLIMMIX and LOGISTIC output. That is still unresolved. My explanation primarily involved non-centered x variables. But with generalized linear models (GLMs) or generalized linear mixed models (GLMMs), things are more complicated. The logit intercept (the link of the expected value) depends on what x variables are in the model, even mean-centered ones. You are not necessarily getting an estimate that directly corresponds to the arithmetic average rate.
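A small simulated example (hypothetical data, not your hospital data) illustrates this: the intercept becomes more extreme than the logit of the raw event rate as soon as a covariate enters the model, even though the covariate is exactly mean-centered.

     data sim;
       call streaminit(2024);
       do i = 1 to 200000;
         x = rand('normal');            /* mean-centered covariate   */
         p = logistic(-2 + 1.5*x);      /* true subject-level model  */
         y = rand('bernoulli', p);
         output;
       end;
     run;

     /* intercept-only fit: the intercept is the logit of the raw event rate (about -1.5) */
     proc logistic data=sim desc;
       model y = ;
     run;

     /* add the mean-centered covariate: the intercept moves out to about -2 */
     proc logistic data=sim desc;
       model y = x;
     run;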

SteveDenham
Jade | Level 19

To back up Larry's comments: if in fact you did use method=quad, I am not surprised that the intercept moved to a more extreme value, as the estimate is conditional on the random effects, rather than marginal as it is when there are no random effects.  See Stroup's Generalized Linear Mixed Models, in particular the material supporting the graphic on the cover, for a more detailed (and more understandable) explanation.
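One way to quantify this: for a logistic model with a normally distributed random intercept of variance sigma_u^2, a commonly cited approximation (Zeger, Liang, and Albert, 1988) is

     beta_marginal ≈ beta_conditional / sqrt(1 + 0.346 * sigma_u^2)

so the conditional (GLIMMIX) intercept is always farther from zero than the marginal one. Taking the two intercepts quoted earlier at face value, -4.1852 / -2.9888 is about 1.40, which would correspond to a random-intercept variance of roughly 2.8 on the logit scale; that is something that can be checked against the covariance parameter estimate in the GLIMMIX output.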

Steve Denham

Haris
Lapis Lazuli | Level 10

I don't have the Stroup book, but thanks for the nod in that direction.  I use Littell et al.'s SAS volume for reference, but Stroup sounds like a good read.

Curious: to my mind, conditional on the random effects, the intercept is just as likely to move up as down.  If there are a lot of outlier hospitals with small sample sizes, the overall sample event rate will move up if the outliers have smaller-than-average event rates and down if the outlier event rates are high.

I don't recall that event rates in my sample are related to sample size.  In fact, the raw event rate is 14.1% and the GLIMMIX fixed intercept translates into something like 13.9%, an imperceptible downward shift.  It's not until I start adding the zero-centered covariates that the drift takes the intercept down as far as 1%!

That tells me that the intercept drift has little or nothing to do with the random effects per se; it has something to do with the combination of covariates and random effects.  I wonder if I have unusual clustering of some covariates in some of my centers.  OR is it the case that covariates combined with random effects ALWAYS drag the intercept down?

SteveDenham
Jade | Level 19

Based on empirical evidence only, and a pretty small sample of studies, the marginal intercept will be closer to Pr(Y|X)=0.5 than the conditional intercept, due to regression to the mean, at least in repeated-measures analyses.  (At least that is my interpretation of some of Stroup's words.)

I think you have identified the most probable cause of the downward trend: the values of the covariates are really different by center.  You may want to include center-by-covariate interactions as additional G-side effects.  Some, if not most, will probably have a zero estimate, but it could be that some are pretty substantial and are thus affecting the intercept estimate.  But that is just a guess until the analysis digs into the data.
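In GLIMMIX terms, that would look something like this (a sketch only; x1 and x2 stand in for whichever covariates you suspect vary by center):

     proc glimmix data=analyze method=quad;
       class Hospital_ID;
       model Mortality(event=last) = <long list of covariates>
         / dist=binary link=logit solution cl;
       /* G-side center-by-covariate effects: random slopes for selected covariates */
       random intercept x1 x2 / subject=Hospital_ID type=vc solution cl;
     run;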

Umm, what happens if the covariates aren't zero centered?  What happens if they are zero centered BY CENTER?
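For the zero-centered-BY-CENTER version, one way to do the centering is a grouped query before the model step (again a sketch; x1 stands in for any covariate):

     proc sql;
       create table analyze_c as
       select *, x1 - mean(x1) as x1_ctr   /* x1 centered within each hospital */
       from analyze
       group by Hospital_ID;
     quit;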

Steve Denham
