## Something wrong with my PROC POWER code? Always low power for big ...


Posted 04-19-2024 07:28 AM

Trial paper address: https://www.nejm.org/doi/10.1056/NEJMoa2303062

I expected a much higher power. The code also seems wrong because it produces results where the smaller the standard deviation, the more events are required.

My code:

```sas
%let evTotal  = 184;
%let desPower = 0.80 0.90;
%let hrUpper  = 1.08;
%let hrLower  = 0.60;
%let se       = 0.1499;

%macro P;
    ods pdf file='/home/tomhsiung0/Academic/powerCalc.pdf';

    proc power;
        coxreg
            hazardratio = 1.05 1.50 2.0
            stddev      = &se
            eventstotal = &evTotal
            power       = .;
    run;

    proc power;
        coxreg
            hazardratio = 1.05 1.50 2.0
            stddev      = &se
            eventstotal = .
            power       = &desPower;
    run;

    ods pdf close;
%mend;
%p;
```

Results (probably wrong):

ACCEPTED SOLUTION


I misunderstood the meaning of the stddev option. It is not the standard error of the regression coefficient of the predictor of interest; it is the standard deviation of the predictor itself. If, for instance, the predictor is a binary intervention-arm variable (e.g., drug vs. placebo), stddev should be the square root of p(1-p), with p being the proportion assigned to one arm.

Here is the final result for this trial: the trial has 1:1 randomization, so p is 0.5, and the square root of 0.5*(1-0.5) is also 0.5. I much appreciate your help.
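As a quick sanity check of that arithmetic, here is a minimal Python sketch of the arm-indicator standard deviation; the 2:1 case is my own added illustration, not part of this trial:

```python
from math import sqrt

def arm_indicator_sd(p):
    """Standard deviation of a Bernoulli(p) treatment-arm indicator: sqrt(p(1-p))."""
    return sqrt(p * (1.0 - p))

print(round(arm_indicator_sd(0.5), 4))        # 0.5    -- 1:1 randomization
print(round(arm_indicator_sd(2.0 / 3.0), 4))  # 0.4714 -- hypothetical 2:1 randomization
```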

11 REPLIES


I suggest that you state, with values, what is "wrong" and why you believe it to be wrong.

In particular, support the claim "this code produces results where the smaller the standard deviation, the more events are required." Your code shows only one value of the macro variable &se, so there is no support for "smaller the standard deviation."

What I do see is expected: require a larger power, and the required number of events goes up for a given standard deviation and other parameters.

Your first bit of code is telling you that if you have only 184 events, then your power is very low (in the 0.05 to 0.2 range). The second shows just how large the required number of events is if you want to achieve a power of 0.8 or 0.9.

A hazard ratio near 1 (your 1.05) means that differences are going to be difficult to detect and require a much larger number of events than a ratio of 2. This is analogous to testing whether a coin flip is biased by trying to detect the difference between 0.5 and 0.4999 as the probability of heads.


Hi, ballardw

Thanks for your feedback. I think I had misunderstood the meaning of the parameter

stddev=&se

The value of stddev should not be derived from the observed confidence interval, i.e., se = (log(hrUpper) - log(hrLower))/(2*1.96), with

%let hrUpper = 1.08; %let hrLower = 0.60;

Instead, the values of hrUpper and hrLower should be the presumed boundaries of the confidence interval. My code is wrong because I presume a significant difference, yet I supplied an HR confidence interval that obviously crosses 1.0. I think the stddev parameter controls the presumed width of the confidence interval; therefore, the narrower the width (the more precise), the more sample observations are required.
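For reference, the &se = 0.1499 in the first post does appear to come from the observed CI via this formula. A quick Python check, assuming natural logs and a 95% interval:

```python
from math import log

# CI boundaries taken from the first post's macro variables
hr_upper, hr_lower = 1.08, 0.60

# Back out the standard error of log(HR) from a 95% CI:
# se = (log(upper) - log(lower)) / (2 * 1.96)
se = (log(hr_upper) - log(hr_lower)) / (2 * 1.96)
print(round(se, 4))  # 0.1499 -- matches &se in the original code
```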

Above is my present understanding of PROC POWER with the coxreg statement. I corrected my code as follows:

```sas
%macro P;
    %let evTotal  = 184;            /* Events actually observed           */
    %let hr       = 0.90 0.80 0.70; /* Presumed true HRs                  */
    %let desPower = 0.80 0.90;      /* Power goals                        */
    %let hrUpper  = 1.00;           /* Presumed upper CI boundary of HR   */
    %let hrLower  = 0.50;           /* Presumed lower CI boundary of HR   */
    %let lnHrUpper = %sysfunc(log(&hrUpper));
    %let lnHrLower = %sysfunc(log(&hrLower));
    %let se = %sysevalf((&lnHrupper - &lnHrLower)/2*1.96);
    %put The value of lnHrUpper is &lnHrUpper;
    %put The value of lnHrLower is &lnHrLower;
    %put The value of se is &se;

    ods pdf file='/home/tomhsiung0/Academic/powerCalc.pdf';

    proc power;
        coxreg
            hazardratio = &hr
            stddev      = &se
            eventstotal = &evTotal
            power       = .;
    run;

    proc power;
        coxreg
            hazardratio = &hr
            stddev      = &se
            eventstotal = .
            power       = &desPower;
    run;

    ods pdf close;
%mend;
%p;
```

With a result of,

So, we have a power of 90.8% to detect a presumed HR of 0.70, with a presumed CI of 0.50-1.00 (stddev = 0.679284). And since we did not get this result, we are 90.8% sure the true HR is larger than 0.70.
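For reference, the SAS documentation describes the COXREG analysis as based on Hsieh and Lavori (2000), whose approximation gives power = Phi(sd * sqrt(D) * |log HR| - z_{1-alpha/2}). A Python cross-check of the 90.8% figure, assuming a two-sided alpha of 0.05:

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def coxreg_power(hr, stddev, events):
    """Approximate Cox regression power (Hsieh & Lavori 2000),
    two-sided alpha fixed at 0.05: Phi(sd * sqrt(D) * |log HR| - z_0.975)."""
    z = 1.959963984540054  # z_{0.975}
    return norm_cdf(stddev * sqrt(events) * abs(log(hr)) - z)

print(round(coxreg_power(0.70, 0.679284, 184), 3))  # 0.908, matching the post
```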


Proc Power performs things like determining the sample size needed to achieve a given power, or estimating the expected power for a given sample size, given estimated parameters. Planning information.

It is not for post hoc analysis (after data collection and analysis). Your statement "So, we have a power of 90.8% to detect a presumed HR of 0.70, with presumed CI of 0.50-1.00 (stddev = 0.679284). And the fact is **we did not get this result**, so we are 90.8% sure the true HR is larger than 0.70" looks a lot like post hoc analysis.

From the online documentation, in the Overview section for Proc Power (some minor emphasis added):

## Overview: POWER Procedure

Power and sample size analysis optimizes the resource usage and design of a study, improving chances of conclusive results with maximum efficiency. The POWER procedure performs prospective power and sample size analyses for a variety of goals, such as the following:

- determining the sample size required to get a significant result with adequate probability (power)
- characterizing the power of a study to detect a meaningful effect
- conducting what-if analyses to assess sensitivity of the power or required sample size to other factors


Hi, ballardw. Thank you for your feedback. I understand where you are coming from, but I need to clarify that my intention is to estimate the actual power of the study after the actual events have been observed. This helps evaluate the risk of a type II error in this study.


The formula has a big error.

%let se = %sysevalf((&lnHrupper - &lnHrLower)/2*1.96);

This is wrong. The correct version should be:

%let se = %sysevalf((&lnHrupper - &lnHrLower)/(2*1.96));

I am not 100% sure what the stddev parameter should be. I think it is the standard deviation of log(HR), which is approximately normally distributed. Based on this understanding, I constructed the formula to calculate the value:

(log(UpperCI) - log(LowerCI)) / (2 x 1.96)
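The difference comes down to precedence: %SYSEVALF, like most languages, evaluates division and multiplication left to right. A small Python illustration of the same issue, using the hrUpper/hrLower values from the corrected macro above:

```python
from math import log

# Presumed CI boundaries from the corrected macro
ln_upper = log(1.00)
ln_lower = log(0.50)
width = ln_upper - ln_lower   # log-scale CI width, about 0.6931

wrong = width / 2 * 1.96      # parsed as (width / 2) * 1.96
right = width / (2 * 1.96)    # intended: divide by 3.92

print(round(wrong, 6))  # 0.679284 -- the stddev that produced the 90.8% result
print(round(right, 6))  # 0.176823
```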


I am confused.

If it were a binomial distribution, then variance = N*p*(1-p), so stddev should be the square root of N*p*(1-p), right?


Yes. And in your case, the maximum value of the square root is 0.50.


Sorry, it should be a Bernoulli distribution. Each observation in this case either receives the drug or not, so the random variable is either 1 or 0. A binomial random variable can take values other than 0 or 1. It should have been the Bernoulli distribution.


I know it is a binomial distribution.

I said it should be the square root of N*p*(1-p). You didn't consider the sample size N in your formula. You said "the stddev should be square root of p(1-p)"; there is no N in it.


Hi, Ksharp

Thanks for your feedback. However, I don't see it that way. The stddev is the standard deviation of the predictor, and the predictor can only be 1 or 0, given that it represents a trial with only two arms (e.g., drug vs. placebo). A predictor with a binomial distribution does not have to be only 1 or 0; it can be any non-negative integer. Therefore, the distribution of the predictor is a Bernoulli distribution, not a binomial distribution. My first post made that mistake; it should have said Bernoulli distribution.

Regards,
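To make the N question concrete: the standard deviation of the 0/1 predictor itself does not grow with N, whereas sqrt(N*p*(1-p)) is the standard deviation of the *sum* of N such indicators. A small sketch with a hypothetical 1:1 allocation:

```python
from math import sqrt

def population_sd(xs):
    """Population standard deviation (divides by N, not N-1)."""
    n = len(xs)
    mean = sum(xs) / n
    return sqrt(sum((x - mean) ** 2 for x in xs) / n)

for n_per_arm in (10, 100, 1000):
    arm = [1] * n_per_arm + [0] * n_per_arm   # 1:1 drug vs. placebo indicator
    print(n_per_arm, population_sd(arm))      # 0.5 every time, independent of N
# The sum of all N indicators, by contrast, has sd sqrt(N * p * (1-p)).
```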
