Score test and Wald test show widely discrepant re...


07-30-2015 02:01 PM

I have time-to-event data with clustered observations, so I am using PROC PHREG like so:

proc phreg data = xxx covs(aggregate);
   by byvar;
   class cluster category;
   model month * status(0) = pred cluster category / ties = efron;
   id cluster;
run;

My problem is that when I run this model, I get this output:

Testing Global Null Hypothesis: BETA=0

Test                    Chi-Square    DF    Pr > ChiSq
Likelihood Ratio          214.2638    32        <.0001
Score (Model-Based)       375.3325    32        <.0001
Score (Sandwich)           21.0000    21        0.4589
Wald (Model-Based)        231.4213    32        <.0001
Wald (Sandwich)         1.13909E11    21        <.0001

The Wald (Sandwich) chi-square is huge and significant; the Score (Sandwich) is small and nowhere near significant. Is something wrong with the Score (Sandwich)? Or with the Wald (Sandwich)?

Can anybody help with interpretation here?
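(One way to build intuition for how the two sandwich statistics can diverge: the Wald statistic is the quadratic form b' V⁻¹ b, so a nearly singular sandwich covariance V, which is plausible with only about 20 clusters against 32 parameters, can inflate it enormously, while the score test does not depend on the estimated coefficients in the same way. A minimal numerical sketch with entirely made-up numbers, not the actual model output:)

```python
import numpy as np

def wald_chisq(beta, cov):
    """Wald chi-square: beta' * inv(cov) * beta."""
    return float(beta @ np.linalg.solve(cov, beta))

# Hypothetical two-parameter example (not from the real model).
beta = np.array([0.5, -0.3])

# Well-conditioned covariance: modest Wald statistic.
well = np.array([[0.04, 0.00],
                 [0.00, 0.05]])

# Nearly singular covariance (columns almost collinear), as a sandwich
# estimate can be when clusters are few relative to parameters:
near_singular = np.array([[0.0400, 0.0447],
                          [0.0447, 0.0500]])

print(wald_chisq(beta, well))           # 8.05
print(wald_chisq(beta, near_singular))  # orders of magnitude larger
```

The same coefficients produce a wildly larger statistic once the covariance is close to singular, which is one mechanism (among others) for a Wald chi-square on the order of 1E11.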

By the way, I initially had problems with this model: I got a divide-by-zero error for one of the two BY groups when I used "/ ties = exact". I switched to "/ ties = efron", which does not give me problems. Still, I wonder if this means I have problematic patterns in the data that could be responsible for the widely divergent test statistics above.

Also, FWIW, I wondered whether this discrepancy had anything to do with the inclusion of the cluster variable (which has roughly n = 20 categories) in the model. Indeed, removing that variable from the MODEL statement substantially reduces the Wald (Sandwich) chi-square (which remains significant) and cuts the p-value of the Score (Sandwich) by about 75% (which leaves it still non-significant).

Answers, suggestions, and questions all welcome.


Posted in reply to MichaelLichter

07-30-2015 03:48 PM

By any chance, do some values of category not appear in all clusters? That would at least explain what is going on when you drop the cluster variable from the model.
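(In SAS one would check this with a PROC FREQ cross-tabulation of cluster by category; the same check can be sketched in plain Python, using hypothetical (cluster, category) pairs in place of the real data set:)

```python
from collections import defaultdict

# Hypothetical (cluster, category) pairs standing in for the real data.
rows = [(1, "A"), (1, "B"), (2, "A"), (3, "A"), (3, "B"), (3, "C")]

cats_by_cluster = defaultdict(set)
all_cats = set()
for cluster, cat in rows:
    cats_by_cluster[cluster].add(cat)
    all_cats.add(cat)

# Report, per cluster, which categories never appear in it.
missing = {c: sorted(all_cats - seen)
           for c, seen in cats_by_cluster.items()
           if all_cats - seen}
print(missing)  # {1: ['C'], 2: ['B', 'C']}
```

Any non-empty entry means that cluster contributes no information about the missing category's coefficient, which is the situation that can destabilize the sandwich estimates.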

Steve Denham


Posted in reply to SteveDenham

07-30-2015 04:13 PM

Steve, that is correct. For legitimate reasons, not all categories were present in all clusters.


Posted in reply to MichaelLichter

07-31-2015 10:00 AM

Thanks. Now I definitely vote for "problematic patterns in the data that could be responsible for the widely divergent test statistics above." It could be that the partial likelihood for some clusters is such that the martingale residual under TIES=EFRON is quite large. I don't really have a good workaround. One option is to look at the values under TIES=BRESLOW, but I bet they show the same pattern. You may have to really dig into the responses in each cluster and the metadata for the clusters to see whether clusters can be consolidated (or removed, although that seems extreme).

If there are structural reasons that not all categories are present in all clusters, what about separating the analysis into two (or more) runs by "super-clusters" that share common categories?
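(One simple way to form such super-clusters is to group clusters whose observed category sets match, then run each group as its own analysis, e.g. via a BY group. A sketch with a hypothetical cluster-to-categories map:)

```python
from collections import defaultdict

# Hypothetical map of cluster -> categories observed in it.
cats_by_cluster = {
    "c1": {"A", "B"},
    "c2": {"A", "B"},
    "c3": {"A", "C"},
    "c4": {"A", "C"},
}

# Group clusters by their exact set of observed categories; each
# group ("super-cluster") could then be analyzed separately.
super_clusters = defaultdict(list)
for cluster, cats in cats_by_cluster.items():
    super_clusters[frozenset(cats)].append(cluster)

for cats, clusters in super_clusters.items():
    print(sorted(cats), "->", clusters)
```

Grouping on exact set equality is the strictest rule; in practice one might merge groups whose category sets overlap heavily, guided by the cluster metadata rather than the response data, as noted below.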

Steve Denham


Posted in reply to SteveDenham

08-03-2015 05:20 PM

Thanks, Steve. TIES=BRESLOW produces the same results. I haven't yet had time to look at consolidating similar clusters, but that is worth looking into.


Posted in reply to MichaelLichter

08-04-2015 08:47 AM

The key will be to consolidate based on metadata, not on the design or response data. Otherwise, you just end up with fewer but larger clusters with the same problem.

Steve Denham