proc mixed /diff; Differences of Least Squares Me...


02-01-2018 04:18 AM - edited 02-01-2018 04:19 AM

Hello SAS board,

I am using the code below to analyse our 3x2x2 within-subject experiment. Everything works great. My question is whether the Standard Error in the "Differences of Least Squares Means" table refers to the standard error of the mean of the difference. For example, suppose I am looking at the comparison:

Visibility (level 1) Globality (level 1) Congruency (level **1**) *vs*. Visibility (level 1) Globality (level 1) Congruency (level **2**)

Does the t value in the table come from a (paired-sample) t test of these two conditions, with the SE then calculated for the difference between the two conditions?

```
/* 3x2x2 within-subject model; no DATA= option is given,
   so PROC MIXED uses the most recently created data set */
PROC MIXED;
   CLASS visibility sub globality congruency;
   MODEL rt = visibility|globality|congruency;
   REPEATED / subject=sub type=cs;   /* compound symmetry within subject */
   LSMEANS globality|congruency|visibility / diff cl;
RUN;
```

I need to know this because I have to plot figures with error bars representing the standard error of the mean, corrected for the within-subject design.

I would greatly appreciate your feedback and time!

Dina.

Accepted Solutions

Solution (accepted 02-05-2018 02:19 AM)


Posted in reply to dina_d

02-04-2018 02:16 PM

In a mixed design, the standard error (SE) of a mean includes *all* sources of variance (block, whole plot, subplot, etc.). Consequently, figures depicting the mean and SE may appear to be inconsistent with results from the statistical analysis, which controls for variance sources.

The difference between two means will "be adjusted for" *some* sources of variance, depending on which two means are compared. The classic text by Cochran and Cox (Experimental Design, 2nd ed) describes the mathematics nicely (I think in the chapter on split-plot designs, but I don't have my copy at hand to check); I'm sure the same information is available in other design texts. That's why you see different SEs for different pairwise comparisons.

You may be confusing "**pair**wise" mean comparisons with "**pair**ed t-test", which is the point that @StatsMan and @PaigeMiller are making. A pairwise comparison of means can be done in conjunction with any experimental design. A paired t-test corresponds to a specific experimental design. Not the same thing.

I hope this helps.
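A concrete way to see this (my own arithmetic, not from the thread, assuming the compound-symmetry structure in the original code, with subject variance σ_s², residual variance σ_e², and n subjects):

```
Var(mean_A)          = (σ_s² + σ_e²) / n    subject variance is included
Var(mean_A - mean_B) = 2 * σ_e² / n         the subject effect cancels in a
                                            within-subject difference
```

This is why the SE of an individual LS-mean can be larger than the SE of a within-subject difference, and why different pairwise comparisons can carry different SEs.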

All Replies


Posted in reply to dina_d

02-01-2018 10:39 AM

The SEs in the LS-means table are the standard errors of the individual LS-means. The SEs in the Differences of LS-means table are the standard errors of the differences between LS-means.

The p-values in the Differences table are for the test of the difference in each pair of LS-means reported. Nothing is paired; these are just comparisons of means.
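If the goal is plotting, one option (a sketch, not from the thread; `have` is a placeholder data set name, and `LSMeans` and `Diffs` are PROC MIXED's ODS table names, each with the SE in a `StdErr` column) is to capture both tables:

```
/* Sketch only: capture the LS-means and their pairwise differences
   so the SEs can be used for error bars. 'have' is a placeholder. */
PROC MIXED data=have;
   CLASS visibility sub globality congruency;
   MODEL rt = visibility|globality|congruency;
   REPEATED / subject=sub type=cs;
   LSMEANS globality|congruency|visibility / diff cl;
   ODS OUTPUT LSMeans=lsm Diffs=diffs;  /* StdErr column in each table */
RUN;
```

In `lsm`, `StdErr` is the SE of each individual LS-mean; in `diffs`, `StdErr` is the SE of the corresponding difference.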


Posted in reply to StatsMan

02-04-2018 08:11 AM

Thank you for the reply!

Just to make sure I got this: are the t tests from the "Differences of Least Squares Means" table paired t tests?

Many thanks,

Dina.



Posted in reply to dina_d

02-04-2018 08:43 AM

Nothing is paired in this analysis.

--

Paige Miller



02-05-2018 02:27 AM

I want to thank everyone who contributed to the discussion!

Since we have a within-subject design, I thought it was a case of a paired t-test.

Many thanks for the clarification,

Dina.


Posted in reply to dina_d

02-06-2018 12:16 PM

A paired t-test is a within-subjects design, so you are right on that point. But it has only one fixed-effects factor with two levels, and hence only one comparison. Your scenario is more complicated, and the statistical model cannot be a paired t-test.
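For completeness, that single-factor special case can be sketched as follows (a hypothetical example, not the poster's data: `wide` is an assumed one-row-per-subject data set with columns `rt_cond1` and `rt_cond2`). With one two-level within-subject factor, the mixed-model LS-means difference and the paired t-test coincide:

```
/* One within-subject factor with two levels: here the LS-means
   difference from PROC MIXED agrees with a paired t-test.
   'wide', rt_cond1, rt_cond2 are hypothetical names. */
PROC TTEST data=wide;
   PAIRED rt_cond1*rt_cond2;
RUN;
```

With three crossed factors, as in the original model, no single paired t-test corresponds to the analysis.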