If the interaction is not significant but the mode...

07-18-2014 04:11 PM

Thanks,

Marcio

Accepted Solutions

Solution

07-19-2014 10:56 PM

It might be very helpful to look at the distribution of residuals from your models. You may be comparing two ill-fitting models, or some outliers may be wreaking havoc on the fit statistics.
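
The residual check suggested above can be sketched in a few lines. This is a Python illustration (not SAS, and the data are invented for the example): fit a simple least-squares line, then flag residuals that are far from zero. One outlier is planted at the last point.

```python
# Sketch with made-up data: one planted outlier, then a residual screen.
x = list(range(10))
y = [2.0 * xi for xi in x]
y[9] += 10.0                      # planted outlier at the last point

# Closed-form simple linear regression: slope = Sxy / Sxx.
n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n
sxx = sum((xi - mean_x) ** 2 for xi in x)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Residuals from the fit; OLS residuals have mean zero by construction.
residuals = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
sd = (sum(r ** 2 for r in residuals) / n) ** 0.5

# Flag residuals more than 2.5 standard deviations from zero.
outliers = [i for i, r in enumerate(residuals) if abs(r) > 2.5 * sd]
print(outliers)  # -> [9]
```

A point like this can inflate the error variance and distort -2LL-based fit statistics in both models being compared, which is why inspecting residuals before trusting AIC/BIC comparisons is worthwhile.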

PG

All Replies

07-18-2014 04:22 PM

Normally I would think you should test for interactions first. If there are none (i.e., they are not significant), you can assume those terms are not important, that is, they do not explain variance in your data, and run models without an interaction term. Does that make sense? That is my understanding.

07-18-2014 06:41 PM

Thanks Kodmfl, but how do the AIC/BIC play into it?

07-19-2014 08:26 AM

It is a measure of model fit; the lower the value, the better the model. As a matter of fact, I don't think you should keep the interaction effect in the model.

Xia Keshan

07-19-2014 11:52 AM

These are goodness-of-fit measures used to compare one model to another.

07-19-2014 08:56 PM

So just to make sure we are on the same page: I had a model where the interaction was not significant, but the AIC/BIC were lower in that model than when I took the interaction out. Based on what you all said, I should take the interaction out, even though the fit is better?

07-20-2014 12:25 PM

I don't think there is a direct relation between AIC and the significance of the interaction term. If you add more predictors to the model, even if some of them are irrelevant to the study, you will get a lower AIC.

07-21-2014 09:38 AM

I think the comment by stat@sas is a bit misleading. If you add variables to a model, the -2 log-likelihood (-2LL) will go down, but the AIC may or may not go down. The AIC adds a penalty for the number of parameters in a model. If the addition of a variable (i.e., parameter) does not reduce the -2LL enough, then the AIC will go up. That is why the AIC is a useful measure of goodness of fit penalized by the number of parameters: it prevents drastic overfitting of a model. It is not exactly a hypothesis test, however.

In programs like MIXED, GLIMMIX, etc., a scaled Wald statistic (t or F) is used to test hypotheses. This does not use the -2LL directly. The Wald test can lead to conclusions similar to those obtained by looking at AIC, but not always. There are tests based directly on the -2LL (likelihood ratio tests), but even these do not exactly correspond to the AIC.

You have a case where there is a disagreement. It is well known that model selection based on the AIC may lead to an overly complex model. You will have to decide whether the interaction matters.
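
The point above can be made concrete with a small sketch (in Python rather than SAS; the toy data and the helper function are invented for illustration). Adding a column to an OLS design matrix can only reduce the residual sum of squares, so -2LL never goes up; the AIC, however, charges 2 per parameter on top of that.

```python
import numpy as np

# Toy data: y depends on x1; x2 is pure noise.
rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                    # irrelevant predictor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

def m2ll_and_aic(X, y):
    """Gaussian -2 log-likelihood and AIC for an OLS fit.

    k counts the regression coefficients plus one for sigma^2.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    m = len(y)
    m2ll = m * (np.log(2 * np.pi) + np.log(rss / m) + 1)
    k = X.shape[1] + 1
    return m2ll, m2ll + 2 * k

X_red = np.column_stack([np.ones(n), x1])          # intercept + x1
X_full = np.column_stack([np.ones(n), x1, x2])     # adds the noise column
m2ll_red, aic_red = m2ll_and_aic(X_red, y)
m2ll_full, aic_full = m2ll_and_aic(X_full, y)

# -2LL never worsens when a predictor is added (nested models) ...
assert m2ll_full <= m2ll_red + 1e-6
# ... but the AIC gap is the -2LL gap plus 2 x (one extra parameter).
assert abs((aic_full - aic_red) - (m2ll_full - m2ll_red) - 2.0) < 1e-9
```

So whether AIC favors the larger model comes down entirely to whether the -2LL improvement beats the fixed penalty, which is a different question from whether the term passes a Wald or likelihood-ratio test.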

07-21-2014 11:43 AM

Thanks for the detailed explanation of AIC. As you mentioned, adding a new variable to the model will reduce the -2 log-likelihood. So if the reduction in -2 log-likelihood from adding a variable is greater than 2 per added parameter (the 2k penalty), then the AIC will come out lower, right?

07-21-2014 02:36 PM

With the addition of one parameter (e.g., one variable in a linear model), -2LL has to decrease by at least 2 for there to be a decrease in AIC.
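
The arithmetic behind that statement is just AIC = -2LL + 2k, so adding one parameter raises the penalty by 2; hypothetical numbers below (not from the thread) show both directions:

```python
# AIC = -2LL + 2k; one extra parameter must buy a -2LL drop of more than 2.
def aic(m2ll, k):
    return m2ll + 2 * k

base = aic(100.0, 3)               # 106.0
small_gain = aic(100.0 - 1.5, 4)   # -2LL drops 1.5 -> AIC 106.5 (worse)
big_gain = aic(100.0 - 3.0, 4)     # -2LL drops 3.0 -> AIC 105.0 (better)
```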

07-21-2014 10:42 AM

Marcio,

Can you tell me a bit more about the design of your experiment? What are the data? What questions are you asking, i.e., what hypotheses are you trying to test with your statistical models?

It would seem to me that, no matter what, you should attempt to develop the most parsimonious model possible. In that case you will probably want to drop the interaction term, if it is not significant, and rerun the analysis without it, regardless of the AIC/BIC values. Another thing to consider is not just the statistical significance but the reality of your results: if you are going to keep a particular effect in your model, then you should be able to demonstrate somehow that it exists. One COULD argue, I suppose, for retaining a particular term in a model (an interaction, for example) at P > 0.05 (but close to 0.05) if you have reasonable evidence for doing so, but I don't think that a better fit of your model is a good enough reason. In fact, in my experience a statistician/scientist typically feels damn lucky to get rid of interactions if he can, because they can be damn hard to explain from a theoretical point of view. That may not always be the case, and interesting things CAN be discovered BECAUSE of interactions, but not if they are not really there, so to speak.

As for AIC and BIC, they are criterion-based model selection approaches typically used for time series and multiple regression analysis; I have not used them extensively. My advice is to understand your data as intimately as you can before allowing a statistical criterion to make choices for you.

If you are doing ANOVAs, do means plots. If you are doing regressions, do a 2-D or 3-D contour plot.
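
A means plot boils down to comparing cell means across factor levels; a minimal Python sketch (the 2x2 design and replicate values are made up) shows the quantity the plot visualizes. If the effect of factor B is the same at every level of A, the lines in a means plot are parallel, the usual visual signature of no interaction.

```python
# Hypothetical 2x2 design with three replicates per cell.
data = {
    ("A1", "B1"): [10.0, 11.0, 9.0],
    ("A1", "B2"): [14.0, 15.0, 13.0],
    ("A2", "B1"): [12.0, 13.0, 11.0],
    ("A2", "B2"): [16.0, 17.0, 15.0],
}
cell_means = {cell: sum(v) / len(v) for cell, v in data.items()}

# Effect of B within each level of A; equal effects = parallel lines,
# i.e., no interaction pattern in the cell means.
b_effect_a1 = cell_means[("A1", "B2")] - cell_means[("A1", "B1")]
b_effect_a2 = cell_means[("A2", "B2")] - cell_means[("A2", "B1")]
print(b_effect_a1, b_effect_a2)  # -> 4.0 4.0
```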

Hope this helps. Best of luck.

07-21-2014 09:15 PM

I appreciate all the input you gave me! That will help me decide on the model.

Kodmfl - in that example the variable definitely has biological meaning.

Thanks!

Marcio

07-22-2014 11:46 AM

Excluding a term based on a p-value is always dangerous. What would you be asking if one of the main effects was not significant, but the interaction was significant? Which makes the results more interpretable?

And what happens to the corrected AIC (AICc)? It has been shown to be a better measure of model fit for smaller datasets.
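
For reference, the small-sample correction adds an extra penalty term, AICc = AIC + 2k(k+1)/(n-k-1); a quick sketch with hypothetical numbers shows how it shrinks as n grows:

```python
# AICc = AIC + 2k(k+1)/(n - k - 1); the correction matters when n is
# small relative to the parameter count k.
def aicc(aic, k, n):
    return aic + 2 * k * (k + 1) / (n - k - 1)

small_sample = aicc(100.0, 5, 20)    # 100 + 60/14  ~ 104.29
large_sample = aicc(100.0, 5, 200)   # 100 + 60/194 ~ 100.31
```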

Steve Denham

07-31-2014 10:15 PM

Steve, thanks I will check that!

Marcio