Survival analysis with repeated measures and rando...
11-07-2013 09:03 AM

Hello all,

I am interested in analyzing data with a time to event response variable.

My response variable measures the time until a treatment succeeds in doing what it is meant to do. The response variable (Y) takes the values 0, 1, 2, 3, 4, 5, and 10 minutes (the times at which the status is checked).

If the treatment has not worked after 10 minutes, it is recorded as a failure, which amounts to a form of right censoring. Naturally, lower values of Y are better.

In this dataset there are two treatments: a new intervention and a control, which is the standard of care. The main question is comparing the two treatments, to establish superiority of the new one over the existing one.

Each patient enrolled in this trial received the above procedure once or more. Patients most frequently have either 1 or 2 procedures; 3 procedures is rare but not impossible. All procedures within a patient receive the same treatment, either the new one or the control.

There are two types of these procedures, which I'll call A and B. Every patient has one of these two; other procedure types exist but were not chosen for this trial. In other words, every patient has 1, 2, or rarely 3 procedures, all of the same type (either A or B), and the treatment is either the new one or the control. Y is measured in each procedure as described above. The correlation within a patient is assumed to be high. The trial is also multi-center.

Summary:

Y - time to event

X1 - treatment - fixed factor

Z1 - procedure type - random factor

Z2 - center - random factor

Subject - repeated measures within a patient

How would you analyze this kind of data using SAS 9.3/9.4? I was thinking about a GLMM via PROC GLIMMIX, but I am not sure how to set up the code, and more importantly, the rationale. Is this a nested design, or a blocked one?
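To make the structure concrete, here is the kind of GLIMMIX skeleton I had in mind. All names (trial, y, trt, proc_type, center, patient) are made up, the gamma distribution is only a placeholder, and the censoring at 10 minutes is not yet handled:

```sas
/* Skeleton only: variable names are hypothetical, dist=gamma is a
   placeholder, and the censoring of Y at 10 minutes is NOT handled. */
proc glimmix data=trial;
   class trt proc_type center patient;
   model y = trt / dist=gamma link=log solution;
   random intercept / subject=center;           /* center as a random block  */
   random intercept / subject=proc_type;        /* procedure type as random  */
   random intercept / subject=patient(center);  /* repeated procedures       */
run;
```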

And one more question, perhaps harder: if you had to plan a trial like this, which approach would you use for the power and sample size calculations?

Thank you in advance


11-07-2013 09:36 AM

The hard question is actually not that difficult once we decide on the analysis: simulation is going to be the only reasonable way to get at power or sample size.

Now on to the hard part (or what I would consider the hard part). The distributions available to PROC GLIMMIX do not include truncated or censored distributions. Take a look at Example 64.5, Failure Time and Frailty Model, in the PROC NLMIXED documentation for some initial stabs at the code. I think the repeated nature will have to be modeled as clustered data by patient rather than as a true repeated measure. If the within-patient correlations are expected to be high, then the order of procedures should not make much difference. Center is certainly a random factor. I would call this a blocked design, with all treatments randomly assigned within each block (= center).
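As a starting point, here is a rough NLMIXED sketch along those lines, with a Weibull likelihood, right censoring at 10 minutes, and a normal patient-level frailty. All names and starting values are assumptions, and the Y = 0 observations would need special handling since log(t) is undefined at zero:

```sas
/* Sketch only: dataset/variable names and starting values are
   hypothetical; observations with t = 0 need special handling. */
proc nlmixed data=trial;
   parms b0=0 b1=0 shape=1 s2u=0.1;
   eta    = b0 + b1*trt + u;          /* trt = 0/1 treatment dummy      */
   lambda = exp(-eta);
   if censor = 0 then                 /* event observed before 10 min   */
      ll = log(shape) + (shape-1)*log(t) + shape*log(lambda)
           - (lambda*t)**shape;
   else                               /* right-censored at t = 10       */
      ll = -(lambda*t)**shape;
   model t ~ general(ll);
   random u ~ normal(0, s2u) subject=patient;
run;
```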

If all this works, then simulate 1,000 to 10,000 datasets per sample size, analyze each one, and look at the empirical power. 's blog and book will be invaluable for doing this.
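A bare-bones version of that simulation loop might look like the following; every parameter value here (effect size, frailty variance, Weibull shape) is an assumption to be replaced with plausible planning values:

```sas
/* Sketch of a simulation-based power calculation: generate many
   datasets under an assumed effect size, then fit the chosen model
   to each replicate and tabulate rejections.  All parameter values
   below are assumptions for illustration only. */
%macro simpower(nsim=1000, npat=100);
   data sim;
      call streaminit(20131107);
      do rep = 1 to &nsim;
         do patient = 1 to &npat;
            trt = (patient > &npat/2);           /* 1:1 allocation     */
            u   = rand('normal', 0, 0.5);        /* patient frailty    */
            t   = rand('weibull', 1.5, exp(0.5*trt + u));
            censor = (t > 10);
            if censor then t = 10;               /* censor at 10 min   */
            output;
         end;
      end;
   run;
   /* Next: fit the chosen model BY rep, collect the treatment
      p-values with ODS OUTPUT, and compute the fraction < 0.05. */
%mend;
```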

Steve Denham