
significant lsmestimates for no significant intera...


04-09-2015 09:39 AM

In SAS for Mixed Models it says: "Although the A*B interaction is not significant, you may still want to look at various differences among specific A*B means." I used the LSMESTIMATE statement for that even though the interactions are not significant, and got some significant lsmestimates. Is it appropriate to use this information (the significant p-values from the lsmestimates) when the interactions are not significant? (I used the SIMULATE adjustment.)

I would greatly appreciate some comment.

Thank you!

Caroline
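
For reference, a minimal sketch of the kind of statement being asked about, assuming PROC GLIMMIX and a hypothetical 2x2 factorial (the data set, variable names, and coefficient patterns are illustrative only, not Caroline's actual model):

```sas
/* Hypothetical 2x2 factorial.  LSMESTIMATE coefficients index the A*B
   cell means in sorted class-level order: A1B1  A1B2  A2B1  A2B2.     */
proc glimmix data=mydata;
   class A B;
   model y = A B A*B;
   /* Preplanned cell-mean comparisons with simulation-based
      multiplicity adjustment, as described in the question.          */
   lsmestimate A*B 'A1 vs A2 at B1'  1  0 -1  0,
                   'A1 vs A2 at B2'  0  1  0 -1 / adjust=simulate;
run;
```

When ADJUST= is specified, the output table reports the adjusted p-values alongside the raw ones, which is what the discussion below turns on.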


All Replies


04-10-2015 01:24 PM

This is one of those conundrums that people write dissertations on in mathematical statistics. The conventional wisdom is that if you prespecify the comparisons, you don't need to look at the significance of the interaction effect--you know ahead of time what you want to compare. However, if everything is done post hoc, and you are just looking for anything that might be a difference, then you should use the interaction significance test as a "gatekeeper" before doing any comparisons of means.

My feeling is that you wouldn't have designed an experiment without wanting to look at some specific comparisons, and once those are prespecified (and adjusted for), you shouldn't have to worry about whether the interaction is "significant" or not.

Steve Denham


04-10-2015 02:15 PM

Thank you so much, Steve, for your wonderful comment!! It is clever advice and will help me a lot in the future.

What do you mean by using the interaction significance test as a "gatekeeper" before doing comparisons (if that would be the case)?

Thank you Steve!

Caroline


04-10-2015 02:25 PM

Well, let's suppose you have an observational study (rather than a designed experiment), and you have males and females, and say four or five different drugs of interest. In something like this, I would check to see if the drug by gender interaction was significant, say at the alpha=0.05 level, before I did any comparisons. If it was significant, then I would do my comparisons within gender, if not, then the comparisons would be between the drugs using the marginal means over gender.

Steve Denham
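
The gatekeeper workflow described above could be sketched in PROC GLIMMIX roughly as follows (data set and variable names are hypothetical; in practice you would run the model, inspect the interaction test, and then keep only the LSMEANS statement that matches the outcome):

```sas
proc glimmix data=trial;
   class drug gender;
   model response = drug gender drug*gender;
   /* Step 1: check the drug*gender F test in the Type III table.     */
   /* Step 2a: if significant at alpha=0.05, compare drugs within
      each gender (simple effects).                                   */
   lsmeans drug*gender / slicediff=gender adjust=simulate;
   /* Step 2b: if not, compare the marginal drug means over gender.   */
   lsmeans drug / diff adjust=simulate;
run;
```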


04-10-2015 02:41 PM

I understand; good example. My case is a designed experiment. Now I can sleep in peace.

Thanks a lot Steve for your great help!!

Caroline


04-14-2015 01:06 PM

Dear Steve,

As a complement to what you said in the comments on my question:

In J. Amer. Soc. Hort. Sci. 131(2):201-208. 2006 I found this: "Researchers should construct hypothesis of interest and test these hypotheses whether or not the tests are automatically provided in an omnibus F Test or with the LSMEANS and the PDIFF option" ... "Contrasts are useful statistical tools because treatment differences or interactions could be confirmed significant with contrasts when an analysis of variance suggests there are no treatment differences or significant interactions (Marini, 2003)".

"Ramsey and Schafer (2002) suggest that preplanned tests should be conducted without adjusting probability values regardless of the statistical significance of the F test and that probability values for post hoc or unplanned comparisons should be adjusted".

I adjusted my lsmestimates with SIMULATE; according to these authors, do you think I would be better off using the unadjusted p-values?

Thank you Steve!

Caroline

Solution


04-15-2015 03:03 PM

Hi Caroline,

I'm not as sanguine about unadjusted values as Ramsey & Schafer were. The philosophy seems to be that if you preplan, you get off scot-free. Well, what happens when you have a big multifactor study and the preplanned comparisons number 4 or more at each of 60 timepoints? I think you have a multiplicity problem that you cannot get around--and if it exists there, it exists if you have 2 preplanned comparisons. I also think it is one of the main causes of the failure to replicate "significant" results. I think you are much better off with the adjusted p-values.

Steve Denham


04-16-2015 12:50 PM

Ok, so I'd better use the adjusted p-values. Thank you very much for your support, Steve!

Caroline