I am hoping someone can point out the error of my ways. I am trying to understand why greater power is achieved by increasing the assumed standard deviation of the predictor variable as in the example below:
proc power;
   coxreg
      hazardratio = 1.4      /* hazard ratio to detect                 */
      rsquare     = 0.15     /* R-square of X1 on the other covariates */
      stddev      = 1.2 2.4  /* two scenarios for the SD of X1         */
      ntotal      = 80       /* total sample size                      */
      eventprob   = 0.8      /* probability of observing an event      */
      power       = .        /* solve for power                        */
      ;
run;
(This code was taken from an example in the SAS documentation.)
Why would power be greater for a predictor that has greater variability? It seems like it should have the opposite effect: the power should decrease.
The POWER Procedure
Cox Score Test in Proportional Hazards Regression

Fixed Scenario Elements

   Method                   Hsieh-Lavori normal approximation
   Probability of Event     0.8
   R-square of Predictors   0.15
   Test Hazard Ratio        1.4
   Total Sample Size        80
   Number of Sides          2
   Alpha                    0.05

Computed Power

   Index   Std Dev   Power
       1       1.2   0.846
       2       2.4   >.999
Please see the following paper:
Hsieh, F. Y., and Lavori, P. W. (2000). “Sample-Size Calculations for the Cox Proportional Hazards Regression Model with Nonbinary Covariates.” Controlled Clinical Trials 21:552–560.
In this paper, the authors state the following:
"In a regression model, the variance of the estimate b1 of the parameter θ1 is inversely related to the variance of the corresponding covariate X1."
Therefore, the variance of the parameter estimate gets smaller as the variance of the covariate increases, which translates into a more precise estimate and hence greater power. I have not yet worked through the full reasoning behind this argument, though.
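If it helps to see the mechanics, here is a minimal DATA-step sketch of what I take the Hsieh-Lavori normal approximation to be doing (the variable names are mine; only the scenario values come from the PROC POWER call above). As I read the paper, the approximate power of the two-sided score test is Phi( |log(HR)| * SD * sqrt(N * p * (1 - Rsquare)) - z(1 - alpha/2) ), so the noncentrality term grows linearly with the standard deviation of the covariate:

data hl_power;
   alpha = 0.05;                 /* two-sided significance level                */
   hr    = 1.4;                  /* hazard ratio to detect                      */
   rsq   = 0.15;                 /* R-square of X1 on the other covariates      */
   n     = 80;                   /* total sample size                           */
   p     = 0.8;                  /* probability of observing an event           */
   z_a   = probit(1 - alpha/2);  /* standard normal critical value              */
   do sd = 1.2, 2.4;             /* the two STDDEV= scenarios                   */
      /* noncentrality is linear in sd, so power rises with the covariate's SD */
      power = probnorm(abs(log(hr)) * sd * sqrt(n * p * (1 - rsq)) - z_a);
      output;
   end;
   keep sd power;
run;

proc print data=hl_power noobs;
run;

Running this reproduces the numbers above (0.846, and essentially 1 for SD = 2.4), so the inverse relationship between the variance of the estimate and the variance of the covariate quoted from the paper appears to be exactly what drives the behavior you observed.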