Jack2012
Obsidian | Level 7

All, recently I have been learning the PROC SEQDESIGN procedure from the book Modern Approaches to Clinical Trials Using SAS by Sandeep Menon and Richard Zink, but I found that I could not replicate the columns shown below.

The code for creating the last 3 columns is as follows:

ods graphics on;
proc seqdesign altref=0.19 errspend pss(cref=0 0.5 1) stopprob(cref=0 0.5 1)
               plots=(asn power errspend) boundaryscale=stdz;
   OneSidedErrorSpending: design nstages=2
      method(alpha)=errfuncgamma(gamma=-4)
      alt=upper
      stop=reject /* stop=reject means the trial stops early for efficacy (rejecting the null) */
      alpha=0.025
      beta=0.2
      info=cum(1 2);
   samplesize model=twosamplemean(stddev=1 weight=1);
run;
ods graphics off;

After running this code, none of the results match the numbers in the screenshot from the book below.

For instance, the code above gives a sample size of 870, not 799 as shown in the book.
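(Editorial aside, not part of the original post: 870 is what the classical fixed-sample design gives for these inputs, which suggests the poster is reading the fixed-design size rather than the group sequential quantity reported in the book. A minimal check, assuming the standard normal-approximation formula for a two-sample comparison of means:)

```python
from math import ceil
from statistics import NormalDist

# Fixed-sample (classical) two-sample-means design, normal approximation:
#   N_total = 4 * (z_{1-alpha} + z_{1-beta})^2 * (sigma / delta)^2
alpha = 0.025   # one-sided alpha, as in the PROC SEQDESIGN code above
beta = 0.2     # beta, as in the code above
delta = 0.19   # altref (mean difference)
sigma = 1.0    # stddev
z_alpha = NormalDist().inv_cdf(1 - alpha)
z_beta = NormalDist().inv_cdf(1 - beta)
n_total = 4 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2
print(ceil(n_total))  # -> 870
```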

I contacted the authors as well, but have not received a reply.

Could anybody help me figure this out? Thank you in advance.

 

[Screenshot: Jack2012_0-1615511729995.png]

 

1 ACCEPTED SOLUTION

Accepted Solutions
FreelanceReinh
Jade | Level 19

@Jack2012 wrote:
(...) Why set different beta values? I still can't connect these values. Generally, we specify a fixed power, i.e. 1 - beta, then calculate the sample size and the corresponding "early stopping probability for efficacy at stage 1", but that logic does not seem to apply here.

To me it looks like the authors tried to replicate the power values (0.80, 0.84, ..., 0.96 -- possibly, rounded to three decimals, these were really 0.799, 0.837, ..., 0.957?) from the "Classical Design with pessimistic d" using the "Group Sequential Design with pessimistic d" in order to obtain "Average Sample Size" values comparable to the (constant) "Total Sample Size" of the classical design.
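(The "beta = 1 - power" reading above can be checked with a trivial script; the power values here are the hypothetical rounded values mentioned in this thread, i.e. 0.799 for altref=0.19 and 0.837 for altref=0.20:)

```python
# Betas implied by the book's (hypothetical, rounded) power values.
powers = {0.19: 0.799, 0.20: 0.837}   # altref -> power, per the reply
betas = {d: round(1 - p, 3) for d, p in powers.items()}
print(betas)  # -> {0.19: 0.201, 0.2: 0.163}
```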


6 REPLIES
Jack2012
Obsidian | Level 7
Could any PROC SEQDESIGN user help me? Thank you in advance.
ballardw
Super User

I think you need to look closely at your output.

When I run your code, I see the following as part of the output. I think you are confusing "Max Sample Size" from your output with "Average Sample Size" in the publication. The publication apparently relabeled what the output calls "Expected Sample Size".

 


Sample Size Summary
Test                             Two-Sample Means
Mean Difference                  0.19
Standard Deviation               1
Max Sample Size                  877.9626
Expected Sample Size (Null Ref)  876.6544
Expected Sample Size (Alt Ref)   779.7337
Weight (Group A)                 1
Weight (Group B)                 1
Jack2012
Obsidian | Level 7
Thank you very much for your help on this. However, the results do not show a sample size of 799 as in the publication. Also, from the results I can see that the probability of rejecting the null under the alternative is 0.22377, not 0.163 as shown in the publication.

Further help is still needed. Thank you.
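(Editorial aside: the 0.22377 stopping probability quoted here and the 779.73 "Expected Sample Size (Alt Ref)" in the previous output are mutually consistent. A quick check, assuming equally spaced information levels as specified by info=cum(1 2), so stage 1 uses half the maximum sample size:)

```python
# Reconstruct "Expected Sample Size (Alt Ref)" from the stage-1 stopping
# probability: E[N] = p_stop1 * (N_max / 2) + (1 - p_stop1) * N_max
n_max = 877.9626    # Max Sample Size from the output above
p_stop1 = 0.22377   # prob. of rejecting the null at stage 1 (under the alternative)
expected_n = p_stop1 * (n_max / 2) + (1 - p_stop1) * n_max
print(round(expected_n, 2))  # -> 779.73
```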
FreelanceReinh
Jade | Level 19

Hello @Jack2012,

 

I'm not an expert on PROC SEQDESIGN, so I'm not sure if this helps:

On page 89 of the book (as seen in Google Books) it says: "Example Code 3.3 on page 89 shows how this can be done in SAS using method (alpha)=ERRFUNCOBF or using method (alpha)=ERRFUNCGAMMA(GAMMA-4) [sic! missing "=" sign before "-4"] in proc seqdesign [32]."

 

After replacing ERRFUNCGAMMA(GAMMA=-4) in your code with ERRFUNCOBF, and using beta = 1 - Power (from the highlighted part of your screenshot), i.e., beta=0.201 for altref=0.19 (beta=0.163 for altref=0.20, etc.), I get "Stopping probability (Stage_1)" and "Expected Sample Size (Alt Ref)" (almost) equal to the values in your table, and the same holds for the other altref values (0.20, 0.21, ..., 0.25). The rounding error in beta might explain the remaining small differences.
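(Editorial aside on why the two methods disagree: the OBF-type and gamma(-4) error spending functions allocate different amounts of alpha to the interim look, so the boundaries and sample sizes differ slightly. A numeric sketch using the Lan-DeMets O'Brien-Fleming-type and Hwang-Shih-DeCani (gamma family) spending formulas; whether these match SAS's internal ERRFUNCOBF/ERRFUNCGAMMA definitions exactly is an assumption:)

```python
from math import exp, sqrt
from statistics import NormalDist

alpha = 0.025   # one-sided alpha, as in the code in this thread
gamma = -4.0    # GAMMA=-4 from ERRFUNCGAMMA

def spend_obf(t):
    """Lan-DeMets O'Brien-Fleming-type alpha spent by information fraction t."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return 2 * (1 - NormalDist().cdf(z / sqrt(t)))

def spend_gamma(t):
    """Hwang-Shih-DeCani (gamma family) alpha spent by information fraction t."""
    return alpha * (1 - exp(-gamma * t)) / (1 - exp(-gamma))

# Both functions spend the full alpha at t = 1, but at the interim (t = 0.5)
# the gamma(-4) function spends roughly twice as much as the OBF-type one.
print(spend_obf(0.5), spend_gamma(0.5))
```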

Jack2012
Obsidian | Level 7
This is a great help for my understanding. Thank you!

However, there is still a gap in my understanding of the logic. Why set different beta values? I still can't connect these values. Generally, we specify a fixed power, i.e. 1 - beta, then calculate the sample size and the corresponding "early stopping probability for efficacy at stage 1", but that logic does not seem to apply here.
