I'm wondering why SAS v9.4 proc seqdesign is giving me a substantially different sample size than EAST6. The clinical trial design for both is as follows:
Type I error: 0.05
Type II error: 0.10
one sample test of proportions: null = 0.10, alternative = 0.265
upper-tailed test of the alternative (proportion > 0.265)
reference proportion is the null (proportion = 0.10)
stop for either futility or efficacy; binding futility boundary
two stages using an error spending group sequential design with an O'Brien-Fleming type spending function
interim analysis at 50% of information
I get n=29 with SAS v9.4 proc seqdesign using these design parameters; EAST6 gives n=42. I've pasted the SAS code I used below, but I don't have the EAST6 equivalent, and I don't have access to EAST6 to investigate this, either.
Any help vastly appreciated!
swannie
proc seqdesign plots=boundary(hscale=samplesize)
               boundaryscale=mle errspend;
   /* Two-stage error-spending design with an O'Brien-Fleming-type
      spending function, one-sided upper alternative, stopping for
      efficacy or (binding) futility, interim at 50% information */
   OneSidedOBrienFleming: design nstages=2
      method=errspendobf
      alt=upper stop=both(betaboundary=binding)
      alpha=0.05 beta=0.10
      info=equal;
   /* One-sample test of a proportion; ref=nullprop bases the
      variance on the null proportion */
   samplesize model(ceiladjdesign=include)
      =onesamplefreq(nullprop=0.10 prop=0.265 ref=nullprop);
   ods output AdjustedBoundary=Bnd_Prop4;
run;
Different formulas are used for obtaining the sample size. SEQDESIGN computes it from the maximum information, as shown in "Test for a Binomial Proportion" in the "Applicable One-Sample Tests and Sample Size Computation" section of the SEQDESIGN documentation's Details. I believe EAST bases its computation on a closed-form power equation. EAST might also use the alternative proportion in the variance, which would be more like specifying REF=PROP in SEQDESIGN.
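For what it's worth, the two standard closed-form z-test formulas land almost exactly on the two answers in the original post. Here's a quick sketch in plain Python (an illustration of the fixed-sample formulas, not output from SAS or EAST; the error-spending inflation for a 2-stage O'Brien-Fleming-type design is small enough not to change the ceilings here):

```python
from math import ceil, sqrt
from statistics import NormalDist

# One-sample test of a proportion: H0 p = 0.10 vs H1 p = 0.265,
# one-sided alpha = 0.05, power = 0.90 (the design from the post).
p0, p1 = 0.10, 0.265
alpha, beta = 0.05, 0.10
delta = p1 - p0

z_a = NormalDist().inv_cdf(1 - alpha)  # ~1.645
z_b = NormalDist().inv_cdf(1 - beta)   # ~1.282

# Variance based on the null proportion only (what REF=NULLPROP implies):
n_null = ((z_a + z_b) * sqrt(p0 * (1 - p0)) / delta) ** 2

# Null variance under H0 and alternative variance under H1 -- the usual
# textbook closed form, plausibly closer to what EAST computes:
n_mixed = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) / delta) ** 2

print(ceil(n_null))   # 29 -- matches the SEQDESIGN answer
print(ceil(n_mixed))  # 42 -- matches the EAST answer
```

The near-exact agreement (29 vs 42) suggests the variance assumption, not the group-sequential machinery, is driving the difference.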
Thanks, but could different formulas really give sample sizes this disparate?
Yep, sure is. BTW, the only reply I got to this post came in this week!
How are things at your new gig?
S
Things are good! I hear you are moving on to greener pastures, which makes sense to me. Good luck!
It's funny that I came across this post — I am running into the same problem right now. I have a sample size calculation from EAST that gives me one number, and the number from proc seqdesign is much higher. We are doing a 1-sided test of proportion like your example, and the one-sided test in EAST is equivalent to the 2-sided test in SAS seqdesign. I don't know what to make of it, and it's driving me crazy: I need to submit screenshots to the FDA, and I don't know which to use or how to interpret the difference. Let me know if you have any info about your example that sheds any light on this.
Is this you, Suzanne?
I am having the exact same problem as you are, and I've been digging at this question for a month. I think the answer comes down to two factors.

First: SAS doesn't offer the option of using an exact test in proc SEQDESIGN, whereas EAST does. So one possible difference may be attributed to the use of an exact test vs. a standard z-test.

Second is which variance assumption you are using, as @StatDave says in his reply. This one you can control in SAS: one assumption uses the variance under the null hypothesis, the other uses the variance under the alternative hypothesis, and I've noticed that changing it alters the necessary sample size pretty drastically. In the code you posted, you are using the variance based on the null hypothesis (ref=nullprop), and I betcha a nickel that in EAST you are using the variance based on the alternative hypothesis (ref=prop), which will result in a larger sample size requirement. Neither is wrong — it's just a choice you have to make.

Call me if you want to discuss. This has been driving me crazy for the past month, and I ended up contacting Cytel's technical support over it, so they get the real credit. I think SAS's technical documents do say this, but in a very non-straightforward way. Hopefully they update SEQDESIGN to include an exact test and also add an example about changing the variance assumption to their technical documents.
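On the exact-test point, here's a rough pure-Python sketch of how an exact binomial design can be found — this illustrates the general idea only and is not a reproduction of EAST's algorithm. Because exact power is non-monotone (sawtoothed) in n, you scan n rather than invert a formula:

```python
from math import comb

def upper_tail(n, p, c):
    """P(X >= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

def exact_design(p0, p1, alpha, power, n_max=200):
    """Smallest n whose exact one-sided binomial test at level alpha
    achieves the requested power against p1. Exact power sawtooths
    in n, so we scan n upward rather than solving a closed form."""
    for n in range(1, n_max + 1):
        # smallest critical count c whose exact size is <= alpha
        # (c = n + 1 always works, with size 0)
        c = next(c for c in range(n + 2) if upper_tail(n, p0, c) <= alpha)
        if upper_tail(n, p1, c) >= power:
            return n, c
    raise ValueError("no n <= n_max achieves the requested power")

# Design parameters from the thread's example
n, c = exact_design(p0=0.10, p1=0.265, alpha=0.05, power=0.90)
print(n, c)
```

Because the exact test is conservative (the achievable size sits below alpha), the exact n generally differs from the z-approximation, which is the first of the two factors above.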