Hello SAS board,
I am using the code below to analyse our 3x2x2 within-subject experiment. Everything works well. My question: does the Standard Error in the "Differences of Least Squares Means" table refer to the standard error of the difference between the two means? For example, if I am looking at the comparison:
Visibility (level 1) Globality (level 1) Congruency (level 1) vs. Visibility (level 1) Globality (level 1) Congruency (Level 2)
does the t value in the table come from a (paired-sample) t test of these two conditions? And is the SE then calculated for the difference between the two conditions?
proc mixed;
  class visibility sub globality congruency;
  model rt = visibility|globality|congruency;
  repeated / subject=sub type=cs;
  lsmeans globality|congruency|visibility / diff cl;
run;
I need to know this because I have to plot figures with error bars representing the standard error of the mean, corrected for the within-subject design.
I will greatly appreciate your feedback and time!
Dina.
In a mixed design, the standard error (SE) of a mean will include all sources of variance (block, whole plot, subplot, etc.). Consequently, figures depicting the mean and SE may appear to be inconsistent with results from the statistical analysis, which controls for these variance sources.
The difference between two means will "be adjusted for" some sources of variance, depending on which two means are compared. The classic text by Cochran and Cox (Experimental Design, 2nd ed) describes the mathematics nicely (I think in the chapter on split-plot designs, but I don't have my copy at hand to check); I'm sure the same information is available in other design texts. That's why you see different SEs for different pairwise comparisons.
You may be confusing "pairwise" mean comparisons with "paired t-test", which is the point that @StatsMan and @PaigeMiller are making. A pairwise comparison of means can be done in conjunction with any experimental design. A paired t-test corresponds to a specific experimental design. Not the same thing.
I hope this helps.
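To sketch that point numerically (using made-up variance components, not anything from the question's data): under a compound-symmetry-style model, the subject variance cancels out of a within-subject difference but not out of a single mean or a between-subject difference, which is why different comparisons carry different SEs.

```python
import math

# Hypothetical variance components, assumed purely for illustration
var_subject = 4.0   # between-subject (whole-plot) variance
var_resid = 1.0     # within-subject residual variance
n = 12              # subjects contributing to each cell mean

# SE of a single cell mean: both variance sources contribute
se_mean = math.sqrt((var_subject + var_resid) / n)

# SE of a difference between two conditions measured on the SAME subjects:
# the subject component cancels, leaving only residual variance
se_diff_within = math.sqrt(2 * var_resid / n)

# SE of a difference between means from DIFFERENT subjects: nothing cancels
se_diff_between = math.sqrt(2 * (var_subject + var_resid) / n)

print(se_mean, se_diff_within, se_diff_between)
```

The within-subject difference has a much smaller SE than the between-subject one, which is exactly why error bars built from raw per-condition SEs can look inconsistent with the pairwise tests.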
The SEs in the LS-Means table are the standard errors of the individual LS-means. The SEs in the Differences of LS-Means table are the standard errors of the differences between LS-means.
The p-values in the Differences table are for the test of the difference in each pair of LS-means reported. Nothing is paired; it is just a comparison of means.
Nothing is paired in this analysis.
I want to thank everyone who contributed to the discussion!
Since we have a within-subject design, I thought this was a case of a paired t-test.
Many thanks for the clarification,
Dina.
A paired t-test is a within-subjects design, so you are right on that point. But it has only one fixed-effects factor with two levels, and so only one comparison. Your scenario is more complicated, and the statistical model cannot be a paired t-test.
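For reference, the paired t-test in that simple one-factor, two-level case reduces to a one-sample t on the per-subject differences (toy data invented for illustration):

```python
import math

# Hypothetical scores for 6 subjects under the two levels of one factor
level1 = [2.1, 1.9, 2.5, 2.3, 2.0, 2.4]
level2 = [1.8, 1.7, 2.2, 2.1, 1.9, 2.0]
n = len(level1)

# Per-subject differences carry the whole test
diffs = [b - a for a, b in zip(level1, level2)]
md = sum(diffs) / n
sd = math.sqrt(sum((d - md) ** 2 for d in diffs) / (n - 1))

# Paired t statistic: mean difference over the SE of the mean difference
t = md / (sd / math.sqrt(n))

print(md, t)
```

With three crossed factors there is no single set of differences to reduce to, so the mixed model with a repeated-measures covariance structure takes over that role.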