MichaelLichter
Calcite | Level 5

I have time-to-event data with clustered observations, so I am using proc phreg like so:

proc phreg data = xxx covs(aggregate);
     by byvar;
     class cluster category;
     model month * status(0) = pred cluster category / ties = efron;
     id cluster;
run;

My problem is that when I run this model, I get this output:

Testing Global Null Hypothesis: BETA=0

Test                     Chi-Square    DF    Pr > ChiSq
Likelihood Ratio           214.2638    32        <.0001
Score (Model-Based)        375.3325    32        <.0001
Score (Sandwich)            21.0000    21        0.4589
Wald (Model-Based)         231.4213    32        <.0001
Wald (Sandwich)          1.13909E11    21        <.0001

The Wald(Sandwich) Chi-Square is huge and significant; the Score(Sandwich) is small and not anywhere near significant. Is it possible there's something wrong with the Score(Sandwich)? Or the Wald(Sandwich)?

Can anybody help with interpretation here?

By the way, I initially got a divide-by-zero error for one of the two BY groups when I used "/ ties = exact". Switching to "/ ties = efron" made the error go away. Still, I wonder whether this means there are problematic patterns in the data that could be responsible for the widely divergent test statistics above.

Also, FWIW, I wondered whether this discrepancy had anything to do with the inclusion of the cluster variable (which has roughly 20 levels) in the model. Indeed, removing that variable from the MODEL statement substantially reduces the Wald (Sandwich) chi-square (which remains significant) and cuts the p-value of the Score (Sandwich) by about 75% (which leaves it still non-significant).
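For reference, the reduced model (with cluster dropped as a covariate but kept as the ID variable, so COVS(AGGREGATE) still computes the robust sandwich variance at the cluster level) was essentially the following; the variable names are the same placeholders as in the code above:

proc phreg data = xxx covs(aggregate);
     by byvar;
     class category;
     model month * status(0) = pred category / ties = efron;
     id cluster;
run;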

Answers, suggestions, and questions all welcome.

5 REPLIES
SteveDenham
Jade | Level 19

By any chance, do some values of category not appear in all clusters?  That would at least explain what is going on when you drop the cluster variable from the model.
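One quick way to check, using the same placeholder variable names from your code, would be a cross-tabulation that also lists the zero-count combinations, something like:

proc freq data = xxx;
     by byvar;   /* data are presumably already sorted by byvar for PHREG */
     tables cluster * category / list sparse nopercent nocum;
run;

Any cluster*category combination that shows a frequency of zero is a candidate for the empty-cell problem.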

Steve Denham

MichaelLichter
Calcite | Level 5

Steve, that is correct. For legitimate reasons, not all categories were present in all clusters.

SteveDenham
Jade | Level 19

Thanks.  Now I definitely vote for "problematic patterns in the data that could be responsible for the widely divergent test statistics above."  It could be that the partial likelihood for some clusters is such that the martingale residuals under TIES=EFRON are quite large. I don't really have a good workaround. The first thing I would try is to look at the results under TIES=BRESLOW, but I bet they show the same pattern. You may have to really dig into the responses in each cluster, and into the metadata for the clusters, to see whether clusters can be consolidated (or removed, although that seems extreme).

If there are structural reasons why not all categories are present in all clusters, what about separating the analysis into two (or maybe more) runs, one per "super-cluster" of clusters that share common categories?
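Purely as a sketch of what that could look like: the super_cluster variable and the cluster groupings below are made up, and you would derive them from the cluster metadata instead.

data xxx_sc;
     set xxx;
     /* hypothetical assignment of clusters to super-clusters based on metadata */
     if cluster in ('A', 'B', 'C') then super_cluster = 1;
     else super_cluster = 2;
run;

proc sort data = xxx_sc;
     by byvar super_cluster;
run;

proc phreg data = xxx_sc covs(aggregate);
     by byvar super_cluster;   /* separate fit within each super-cluster */
     class cluster category;
     model month * status(0) = pred cluster category / ties = efron;
     id cluster;
run;

Within each super-cluster, every level of category would ideally appear in every cluster, so the cluster*category part of the design is no longer sparse.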

Steve Denham

MichaelLichter
Calcite | Level 5

Thanks, Steve. TIES=BRESLOW produces the same results. I haven't yet had time to look at consolidating similar clusters, but that is worth looking into.

SteveDenham
Jade | Level 19

The key will be to consolidate based on metadata, not on the design or response data.  Otherwise, you just end up with fewer but larger clusters with the same problem.

Steve Denham

