06-30-2016 03:32 PM - edited 06-30-2016 05:46 PM
Does anybody know how to use PROC MIXED to calculate confidence intervals for a standardized effect size (for example, Cohen's d)? Using PROC MIXED, we can get the least squares means of the mean differences (the mean of the difference scores between pre- and post-treatment) and the corresponding CIs for the lsmeans. But we cannot get CIs for the standardized effect size (say, (M2 - M1)/SDdiff) by simply dividing the CIs for the mean difference by SDdiff (i.e., CIs for mean difference/SDdiff). Is there any way we can get the "ncp" (noncentrality parameter) and then calculate CIs for the standardized effect size?
We had repeated measures on subject (crossover design) and different time points (measured at hour 1 and hour 2).
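In case it helps to see the noncentrality-parameter idea concretely, here is a rough sketch of the "pivot the ncp" approach for the simple two-independent-group case (not the crossover/paired design described above — that would need the paired t statistic and df). It is in Python rather than SAS, with made-up sample sizes and effect size, and because the noncentral t CDF is not in the Python standard library it is approximated by Monte Carlo; for real work you would use an exact noncentral t routine.

```python
import math
import random

def ncp_quantile(t_obs, df, prob, draws=100_000, seed=1):
    """Find the noncentrality parameter nc such that
    P(T <= t_obs) = prob for T ~ noncentral t(df, nc).

    Uses the identity
      P((Z + nc)/sqrt(X/df) <= t_obs) = P(nc <= t_obs*sqrt(X/df) - Z)
    with Z ~ N(0,1) and X ~ chi-square(df), so nc is just an empirical
    quantile of the simulated values c = t_obs*sqrt(X/df) - Z.
    """
    rng = random.Random(seed)
    c = []
    for _ in range(draws):
        z = rng.gauss(0.0, 1.0)
        x = rng.gammavariate(df / 2.0, 2.0)  # chi-square with df degrees of freedom
        c.append(t_obs * math.sqrt(x / df) - z)
    c.sort()
    # the fraction of c-values >= nc equals prob, so nc is the (1 - prob) quantile
    return c[int(round((1 - prob) * (draws - 1)))]

# Made-up example: two independent groups, n1 = n2 = 20, observed d = 0.5
n1, n2, d = 20, 20, 0.5
scale = math.sqrt(n1 * n2 / (n1 + n2))   # t_obs = d * scale
t_obs, df = d * scale, n1 + n2 - 2
nc_lo = ncp_quantile(t_obs, df, 0.975)   # lower bound for the ncp
nc_hi = ncp_quantile(t_obs, df, 0.025)   # upper bound for the ncp
d_lo, d_hi = nc_lo / scale, nc_hi / scale  # back-transform ncp bounds to d
print(f"95% CI for d: ({d_lo:.2f}, {d_hi:.2f})")  # roughly (-0.13, 1.13)
```

The point of the pivot is that a CI for the ncp converts directly into a CI for d, since for this design the ncp is just d times a known function of the sample sizes.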
06-30-2016 04:15 PM
Be careful. Cohen's standardized difference (d) is based on
d = (M2 - M1)/S
where S is the standard deviation of the population, not the standard deviation of the difference. With separate variances for each group, some use the standard deviation of the control population for S. If you are using MIXED with no random effect term, then S is just the square root of the residual variance. One needs the variance of d for confidence intervals, which is complex. But a very good approximation is
(n1+n2)/(n1*n2) + (d^2)/[2*(n1+n2)]
where n1 and n2 are the sample sizes for groups 1 and 2. A cruder approximation, except when d is large, is
(n1+n2)/(n1*n2)
Note: if n1 = n2 = n (same sample size for each group), then the latter reduces to 2/n.
One could use the square root of this as the standard error of d, so a confidence interval is d +/- t*SE, where t is the Student t critical value. This can all be found in chapter 4 of the great book by Borenstein et al. (Introduction to Meta-Analysis).
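For what it's worth, that approximate interval takes only a few lines of code. A minimal sketch (Python, made-up numbers; the standard normal critical value is used as a stand-in for Student's t, which is reasonable for moderate-to-large df):

```python
import math
from statistics import NormalDist

def cohen_d_ci(d, n1, n2, conf=0.95):
    """Approximate CI for Cohen's d (Borenstein et al., ch. 4):
    Var(d) ~= (n1+n2)/(n1*n2) + d^2/(2*(n1+n2)),  CI = d +/- crit*SE.
    """
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se = math.sqrt(var_d)
    # z critical value used in place of Student's t (fine for large df)
    crit = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return d - crit * se, d + crit * se

lo, hi = cohen_d_ci(0.5, 20, 20)
print(f"({lo:.3f}, {hi:.3f})")  # (-0.129, 1.129)
```

With n1 = n2 = 20, the crude first term (n1+n2)/(n1*n2) is 2/20 = 0.1, and the d-dependent second term adds only about 0.003, which is why the crude version works unless d is large.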
06-30-2016 05:26 PM
Thanks, @lvm, for the detailed and very helpful discussion. I understand that the "classic" Cohen's d uses the pooled standard deviation. But a couple of papers have also mentioned using the standard deviation of the differences to standardize the effect size, and we are cautioned that using SDdiff may artificially inflate the effect size.
In our study, we used a crossover design, so for each subject we have both pre- and post-treatment data and the outcomes are paired. We also have repeated measures on both subject and time point (each subject was measured at the 1- and 2-hour time points). So we have to use PROC MIXED to account for the repeated measures.
Do you think we can still use the CI approximation method you suggested to calculate CIs for this standardized effect size (another form of Cohen's d)?
06-30-2016 05:42 PM
You can use many possible standard deviations to standardize a statistic. However, the estimated variance for another type of standardized difference would be different from the one I showed you. Also, because the SD of a difference will be larger than the pooled SD, the magnitude of d will be different, and thus its interpretation would be different (in terms of "large d", "small d", and so on). I have not looked into the literature on this. Also, I have not considered calculating d with repeated measures and multiple random effects. I am sure that others have done this. It would be tricky.
Personally, I would much rather work with differences of means than standardized differences. The theory for defining the variance of a difference is straightforward for any design, including repeated measures, cross-over, etc., based on the results from a mixed model. But you may have good reasons for a standardized difference, so I'll let others comment.
07-01-2016 01:11 PM
Journals that require submission in APA style are highly likely to require standardized effect sizes.
Even when they make no sense whatsoever given the design (repeated measures in a hierarchical design, where the first-level units are meant to be a sample from a broad inference space). Any pooled estimate of variability on the same scale as the means that doesn't accommodate the correlation structure is meaningless.
Oh, wait, that was all just my opinion.