Hi, I want to run a power analysis using PROC POWER to determine the power for a project.
I have a sample size of 32 and want to know the difference in power if I use randomization ratios of 1:2 or 1:3 (placebo vs. treatment).
The issue is that I do not have any reference point for how different my two groups will be, so I can't populate MEANDIFF in the PROC POWER statement. Instead, I wanted to work in standard deviations: what would the power be if the two groups are half a standard deviation apart, one standard deviation apart, and so on? Does that seem feasible, or is there a better approach? The PROC POWER step below doesn't run because I need GROUPMEANS or MEANDIFF, which I don't have...
proc power;
   twosamplemeans test=diff
      groupstddevs = 1 | 2
      ntotal = 32
      power = .
      groupweights = (3,1);
   ods output output=output;
run;
Try running with a variety of MEANDIFF values and examining the power estimates. If you look at an acceptable power level (say 0.8), you can see what detectable difference is associated with that power level. Try this code:
proc power;
   twosamplemeans test=diff
      groupstddevs = 1 | 2
      ntotal = 32
      meandiff = 0.2 to 2.4 by 0.2
      power = .
      groupweights = (3,1);
   ods output output=output;
run;
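If I remember the syntax right, PROC POWER can also solve for the mean difference directly: supply a target POWER value and set MEANDIFF=. instead of scanning. A minimal sketch of that idea, assuming a pooled standard deviation of 1 so the answer comes back in standard deviation units (the 0.8 target and the 3:1 weights are just carried over from above):
proc power;
   twosamplemeans test=diff
      stddev = 1               /* assumed pooled SD of 1: result is in SD units */
      ntotal = 32
      groupweights = (3 1)     /* same 3:1 allocation as above */
      power = 0.8              /* target power level */
      meandiff = .;            /* solve for the minimum detectable difference */
run;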
SteveDenham
Hi Steve,
Thank you, that's a really good idea! I didn't think to work backward from a target power level like 80% to see what the minimum detectable difference could be!
Unfortunately, SAS won't let me run that code without the STDDEV option:
ERROR: The STDDEV option is required for TEST=DIFF
Do you think this is something I should assume as well and estimate a value for it?
Yes. In fact, my MEANDIFF values can be viewed as K * stddev if you set STDDEV=1. The whole idea is that, no matter what the exact standard deviation turns out to be, you can calculate the power as long as the difference to be detected (the MEANDIFF value) is scaled in standard deviation units. I hope that makes sense - it took me a while to learn that shortcut. It really comes in handy when you don't have pilot data.
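For instance, a sketch along those lines might look like the code below; the MEANDIFF values of 0.5, 1, and 1.5, the second set of weights (2 1) for the 1:2 ratio, and the output dataset name outpow are just placeholders you would adapt:
proc power;
   twosamplemeans test=diff
      stddev = 1                  /* pooled SD fixed at 1, so MEANDIFF = K, in SD units */
      meandiff = 0.5 1 1.5        /* groups 0.5, 1, or 1.5 SDs apart */
      ntotal = 32
      groupweights = (3 1) (2 1)  /* 1:3 and 1:2 randomization scenarios */
      power = .;                  /* solve for power */
   ods output output=outpow;      /* placeholder dataset name */
run;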
SteveDenham
I appreciate you walking me through this; just a couple more things that I'm still a bit confused about. When you set up your meandiff=0.2 to 2.4 by 0.2, is 0.2 the standard deviation you are choosing to use in this K*stddev formula?
meandiff = 0.2 to 2.4 by 0.2
Do I even still need GROUPSTDDEVS to show that the two groups are half a standard deviation apart, one standard deviation apart, and so on?
Those would be the K values, with the pooled standard deviation set to 1. I am framing this in terms of a normal(0,1) distribution. With unequal standard deviations, things get trickier. First, you must change to TEST=DIFF_SATT to allow unequal variances per group. Then, in the output, you'll see for each value of MEANDIFF a value called actual alpha and a power. The actual alpha is greater than the nominal 0.05 because Satterthwaite's approximation introduces a bias. In this case the mean difference is NOT in terms of one or the other input GROUPSTDDEVS values, but follows the approximation found in the Details section of the PROC POWER documentation.
standard error of the difference = sqrt(s1*s1/n1 + s2*s2/n2), where s1 is the standard deviation of group one, s2 is the standard deviation of group two, n1 is the number in group one, and n2 is the number in group two. For your case, s1 = 1, n1 = 24, s2 = 2, n2 = 8, so the standard error of the difference is sqrt(1/24 + 4/8) = sqrt(13/24) = 0.74. So in this case, the mean differences run from 0.2 times the standard error of the difference up to 2.4 times it. You could plug in non-scaled values, if you have them.
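Putting that together, a sketch of the unequal-variance run might look like this (the output dataset name outsatt is just a placeholder):
proc power;
   twosamplemeans test=diff_satt    /* Satterthwaite test for unequal variances */
      groupstddevs = 1 | 2          /* group SDs of 1 and 2 */
      ntotal = 32
      groupweights = (3 1)          /* 3:1 allocation, giving n1 = 24 and n2 = 8 */
      meandiff = 0.2 to 2.4 by 0.2
      power = .;                    /* solve for power at each mean difference */
   ods output output=outsatt;       /* placeholder dataset name */
run;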
SteveDenham
Thanks, Steve, for all your help! I was able to get all the information I needed!