Effect selection problem in proc NLMIXED


09-10-2012 10:06 AM

Hi folks,

I am working with SAS 9.3. I would like to do some sort of effect selection in PROC NLMIXED, since the procedure has no built-in selection tools. I ran into something strange while doing so: at one step, entering an effect sharply decreases the -2 log likelihood, yet the p-value for that effect is insignificant. From the output below, I want to know which of "admsource" and "pheartind" is the better effect to enter into the model:

For "admsource"

Fit Statistics

| Statistic | Value |
|---|---|
| -2 Log Likelihood | 3038.0 |
| AIC (smaller is better) | 3044.0 |
| AICC (smaller is better) | 3044.0 |
| BIC (smaller is better) | 3061.4 |

| Parameter | Estimate | Std Err | DF | t Value | Pr > \|t\| | Alpha | Lower | Upper | Gradient |
|---|---|---|---|---|---|---|---|---|---|
| b0 | -4.8324 | 0.3791 | 2448 | -12.75 | <.0001 | 0.05 | -5.5758 | -4.0891 | 2.64E-6 |
| badmsource | 1.4199 | 0.3591 | 2448 | 3.95 | <.0001 | 0.05 | 0.7158 | 2.1240 | 5.435E-6 |
| sd | 0.7893 | 0.1121 | 2448 | 7.04 | <.0001 | 0.05 | 0.5696 | 1.0090 | 0.000026 |

and for "pheartind"

Fit Statistics

| Statistic | Value |
|---|---|
| -2 Log Likelihood | 2674.6 |
| AIC (smaller is better) | 2680.6 |
| AICC (smaller is better) | 2680.6 |
| BIC (smaller is better) | 2697.4 |

| Parameter | Estimate | Std Err | DF | t Value | Pr > \|t\| | Alpha | Lower | Upper | Gradient |
|---|---|---|---|---|---|---|---|---|---|
| b0 | -3.1469 | 0.1818 | 2025 | -17.31 | <.0001 | 0.05 | -3.5034 | -2.7903 | 0.000044 |
| bpheartind | -0.2095 | 0.1641 | 2025 | -1.28 | 0.2020 | 0.05 | -0.5314 | 0.1124 | 0.000016 |
| sd | 0.6711 | 0.1081 | 2025 | 6.21 | <.0001 | 0.05 | 0.4591 | 0.8832 | 0.00004 |
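As a quick sanity check, the AIC values in both tables follow directly from the -2 log likelihood plus twice the number of parameters (k = 3 here: the intercept, the effect, and sd); a small DATA-step sketch:

```sas
data aic_check;
   k = 3;                            /* b0, the effect parameter, and sd */
   aic_adm = 3038.0 + 2*k;           /* = 3044.0, matches the admsource table */
   aic_phe = 2674.6 + 2*k;           /* = 2680.6, matches the pheartind table */
   put aic_adm= aic_phe=;
run;
```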

Although "pheartind" gives a much bigger reduction in the -2 log likelihood, its parameter turns out to be insignificant. So which effect should be included in the model?

Thanks!

Issac


Posted in reply to issac

09-11-2012 07:42 AM

Hi Issac,

What is the value of -2 log likelihood for the null model in each case? I have a sinking feeling that the data are not the same, so simply comparing the log likelihoods of these two models is not the way to proceed. My suspicion is based on the difference in degrees of freedom between the two models: 2025 for bpheartind versus 2448 for badmsource. With that much more data behind the badmsource fit, it is not surprising that its -2 log likelihood is substantially larger.

Steve Denham


Posted in reply to SteveDenham

09-11-2012 08:42 AM

Hi Steve;

For the null model, containing only b0 and e (the random effect), I get the following:

Fit Statistics

| Statistic | Value |
|---|---|
| -2 Log Likelihood | 3082.5 |
| AIC (smaller is better) | 3086.5 |
| AICC (smaller is better) | 3086.5 |
| BIC (smaller is better) | 3098.1 |

| Parameter | Estimate | Std Err | DF | t Value | Pr > \|t\| | Alpha | Lower | Upper | Gradient |
|---|---|---|---|---|---|---|---|---|---|
| b0 | -3.3879 | 0.09136 | 2448 | -37.08 | <.0001 | 0.05 | -3.5671 | -3.2088 | 0.000033 |
| sd | 0.6490 | 0.1055 | 2448 | 6.15 | <.0001 | 0.05 | 0.4421 | 0.8559 | 0.000028 |
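Note that this null fit and the admsource fit earlier in the thread use the same observations (both report DF = 2448), so the drop in -2 log likelihood between them gives a valid one-degree-of-freedom likelihood ratio test; a quick sketch:

```sas
data lrt_admsource;
   /* null model -2LL = 3082.5, admsource model -2LL = 3038.0, same data */
   chisq = 3082.5 - 3038.0;          /* 44.5 on 1 df */
   p = sdf('chisq', chisq, 1);       /* upper-tail chi-square probability */
   put chisq= p=;                    /* p is effectively 0, agreeing with the Wald test */
run;
```

No such comparison is valid against the pheartind fit, which uses a different set of observations.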

Here sd is the standard deviation of e, since I have: RANDOM e ~ NORMAL(0, sd*sd) SUBJECT=id;
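For context, the kind of model being fit looks roughly like the sketch below. The response variable, the logistic link, and the dataset name mydata are all assumptions here (none of them is shown in the thread); only the RANDOM statement comes from the post above:

```sas
proc nlmixed data=mydata;                       /* mydata: placeholder dataset name */
   parms b0=-3 badmsource=0 sd=0.6;             /* rough starting values */
   eta = b0 + badmsource*admsource + e;         /* linear predictor with random intercept */
   p = 1/(1 + exp(-eta));                       /* assumed logistic link */
   model y ~ binary(p);                         /* y: placeholder response */
   random e ~ normal(0, sd*sd) subject=id;      /* as stated in the thread */
run;
```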

This difference in DF happened, I think, because 'pheartind' has 551 missing values. So how can I proceed in choosing the best effect to include at each step?

Really appreciate!

Issac


Posted in reply to issac

09-17-2012 12:52 PM

This is like trying to compare apples and oranges. One thing you might try is to subsample your data so that you have complete data for both bpheartind and badmsource, then fit the models and look at the information criteria. I would repeat this many times (say 500 subsamples) and see which of the two models more often gives the smaller IC.
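A sketch of the first step of this idea: restrict to rows where both candidate effects are observed, so the two models see identical data, then draw the repeated subsamples. The dataset name mydata is a placeholder; the variable names come from the thread:

```sas
data complete_cases;                       /* keep rows with both predictors present */
   set mydata;
   if nmiss(pheartind, admsource) = 0;
run;

/* PROC SURVEYSELECT can then draw the repeated subsamples, e.g.: */
proc surveyselect data=complete_cases out=sub method=srs
                  samprate=0.8 seed=1 reps=500;
run;
```

Each replicate in the output (indexed by the Replicate variable) would then be fed to both one-effect NLMIXED models and the winning IC tallied.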

Steve Denham


Posted in reply to SteveDenham

09-19-2012 02:11 PM

Steve;

Thanks for your hint. I am thinking it may be better to collect more data points, since missing values also exist in other variables in my data, which leads to some instability in the model. In any case, that is a really helpful point to consider.

Issac