palolix
Obsidian | Level 7

In SAS for Mixed Models it says: "Although the A*B interaction is not significant, you may still want to look at various differences among specific A*B means." I used the LSMESTIMATE statement for that even though the interactions are not significant, and got some significant estimates. Is it appropriate to use this information (the significant p-values from the LSMESTIMATE results) when the interactions are not significant? (I used the SIMULATE adjustment.)
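For context, my call looks something like this (a simplified sketch, not my actual code; the dataset, factors, and coefficients are placeholders):

proc glimmix data=mydata;
   class A B block;
   model y = A B A*B;
   random block;
   /* compare specific A*B cell means, with simulation-based
      multiplicity adjustment; cells ordered A1B1 A1B2 A2B1 A2B2 */
   lsmestimate A*B 'A1B1 vs A2B2'  1  0  0 -1,
                   'A1B2 vs A2B1'  0  1 -1  0 / adjust=simulate(seed=1);
run;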

I would greatly appreciate some comment.

Thank you!

Caroline

7 REPLIES
SteveDenham
Jade | Level 19

This is one of those conundrums that people write dissertations on in mathematical statistics.  The conventional wisdom is that if you prespecify the comparisons, you don't need to look at the significance of the interaction effect--you know ahead of time what you want to compare.  However, if everything is done post hoc, and you are just looking for anything that might be a difference, then you should use the interaction significance test as a "gatekeeper" before doing any comparisons of means.

My feeling is that you wouldn't have designed an experiment without wanting to look at some specific comparisons, and once those are prespecified (and adjusted for), you shouldn't have to worry about whether the interaction is "significant" or not.

Steve Denham

palolix
Obsidian | Level 7

Thank you so much, Steve, for your wonderful comment!! It's clever advice and will help me a lot in the future.

What do you mean by using the interaction significance test as a "gatekeeper" before doing comparisons (if that were the case)?

Thank you Steve!

Caroline

SteveDenham
Jade | Level 19

Well, let's suppose you have an observational study (rather than a designed experiment), and you have males and females, and say four or five different drugs of interest.  In something like this, I would check to see if the drug by gender interaction was significant, say at the alpha=0.05 level, before I did any comparisons.  If it was significant, then I would do my comparisons within gender; if not, the comparisons would be between the drugs using the marginal means over gender.
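In PROC GLIMMIX, that gatekeeper logic would look roughly like this (a sketch; the dataset and variable names are made up):

proc glimmix data=trial;
   class gender drug;
   model response = gender drug gender*drug;
   /* the Type III F test for gender*drug is the gatekeeper */
   /* significant interaction: compare drugs within each gender */
   lsmeans gender*drug / slicediff=gender adjust=tukey;
   /* no interaction: compare drugs on marginal means over gender */
   lsmeans drug / pdiff adjust=tukey;
run;

You would look at the "Type III Tests of Fixed Effects" table first, and then report only the appropriate set of comparisons.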

Steve Denham

palolix
Obsidian | Level 7

I understand... good example. My case is a designed experiment. Now I can sleep in peace ;-)

Thanks a lot Steve for your great help!!

Caroline

palolix
Obsidian | Level 7

Dear Steve,

As a complement to what you said in your comments on my question:

In J. Amer. Soc. Hort. Sci. 131(2):201-208. 2006 I found this: "Researchers should construct hypotheses of interest and test these hypotheses whether or not the tests are automatically provided in an omnibus F test or with the LSMEANS and the PDIFF option" ... "Contrasts are useful statistical tools because treatment differences or interactions could be confirmed significant with contrasts when an analysis of variance suggests there are no treatment differences or significant interactions (Marini, 2003)".

"Ramsey and Schafer (2002) suggest that preplanned tests should be conducted without adjusting probability values regardless of the statistical significance of the F test and that probability values for post hoc or unplanned comparisons should be adjusted".

I adjusted my LSMESTIMATE results with ADJUST=SIMULATE. According to these authors, do you think I would be better off using the unadjusted p-values?
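In code, the two choices would look roughly like this (again a sketch with placeholder names, not my real model):

proc glimmix data=mydata;
   class A B;
   model y = A B A*B;
   /* preplanned test, unadjusted, as Ramsey and Schafer suggest */
   lsmestimate A*B 'A1B1 vs A2B1' 1 0 -1 0;
   /* the same comparisons with simulation-based adjustment */
   lsmestimate A*B 'A1B1 vs A2B1'  1  0 -1  0,
                   'A1B2 vs A2B2'  0  1  0 -1 / adjust=simulate(seed=1);
run;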

Thank you Steve!

Caroline

SteveDenham
Jade | Level 19 (Accepted Solution)

Hi Caroline,

I'm not so sanguine about unadjusted values as Ramsey & Schafer were.  The philosophy seems to be that if you preplan, you get off scot-free.  Well, what happens when you have a big multifactor study, and the preplanned comparisons number 4 or more at each of 60 timepoints?  I think you have a multiplicity problem that you cannot get around, and if it exists there, it exists even if you have 2 preplanned comparisons.  I also think it is one of the main causes of the failure to replicate "significant" results.  I think you are much better off with the adjusted p-values.

Steve Denham

palolix
Obsidian | Level 7

OK, so I'd better use the adjusted p-values.  Thank you very much for your support, Steve!

Caroline
