06-17-2018 05:18 PM
06-17-2018 06:11 PM
Compare the AIC for the two fits in the model fit statistics table. The model with the lower AIC is better.
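To make that comparison concrete: AIC = 2k − 2·ln(L), where k is the number of estimated parameters and ln(L) is the model's log likelihood. A minimal Python sketch, assuming you have the log likelihoods from the two fits (the numbers below are made up for illustration):

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: AIC = 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical log likelihoods from two fitted logistic models
# (e.g. the Model Fit Statistics table for M1 vs. M2); values are made up.
aic_m1 = aic(log_likelihood=-210.4, n_params=2)  # intercept + M1
aic_m2 = aic(log_likelihood=-198.7, n_params=2)  # intercept + M2

better = "M2" if aic_m2 < aic_m1 else "M1"
print(f"AIC(M1)={aic_m1:.1f}, AIC(M2)={aic_m2:.1f}; lower (better): {better}")
```

In SAS, both the log likelihood and the AIC are reported directly in the PROC LOGISTIC output, so in practice you only need to read the two values off the table.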
06-18-2018 07:59 AM
06-18-2018 08:41 AM
Thanks for the help! I didn't realize I needed two separate models. Now, if I wanted to determine whether M1 and M2 are independently associated with the outcome, would I construct a new model containing both in order to test for independence?
As you have worded it, I don't think this is a question that can be answered by fitting a model that contains both variables.
If M1 and M2 have a correlation of zero, they have independent effects on the outcome. If their correlation is not zero, then the effects of M1 and M2 on the outcome will be correlated, not independent.
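That correlation can be checked directly from the data. A minimal pure-Python sketch (the measurements below are made up; in SAS this is what PROC CORR reports):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (made-up) paired measurements of the two predictors.
m1 = [1.0, 2.0, 3.0, 4.0, 5.0]
m2 = [2.1, 1.9, 3.2, 3.8, 5.1]
print(f"r = {pearson_r(m1, m2):.3f}")
```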
06-18-2018 11:08 AM
Okay, I think I understand that, thank you! If it's not too much trouble, could you clarify why two separate models are better for my initial aim of comparing the predictive validity of M1 and M2 on the outcome? Would fitting a model with both M1 and M2 introduce potential bias?
06-17-2018 07:53 PM
For logistic regression, make sure to look at the confusion matrix and the AUC as well.
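Both of those can be computed from the outcomes and the model's predicted probabilities. A hedged pure-Python sketch (the outcome and probability values are made up; the AUC uses the rank-based Mann-Whitney formulation):

```python
def confusion_matrix(y_true, y_prob, threshold=0.5):
    """2x2 counts (tp, fp, fn, tn) at a given probability cutoff."""
    tp = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p >= threshold)
    fp = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p >= threshold)
    fn = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p < threshold)
    tn = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p < threshold)
    return tp, fp, fn, tn

def auc(y_true, y_prob):
    """AUC as the probability that a randomly chosen event scores
    higher than a randomly chosen non-event (ties count 1/2)."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Made-up binary outcomes and predicted probabilities from a fitted model.
y = [1, 0, 1, 0, 1, 0]
prob = [0.9, 0.2, 0.5, 0.4, 0.7, 0.55]
print(confusion_matrix(y, prob), f"AUC = {auc(y, prob):.3f}")
```

Note that the AUC summarizes discrimination across all cutoffs, while the confusion matrix depends on the single threshold you choose.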
Hello, I am looking to assess the predictive validity of two measures (M1 and M2) on a binary outcome. After constructing my model, do I just calculate odds ratios and compare them for the two measures? I am not looking to assess the predictive value of the whole model, just these two specific predictors. Thanks for any clarification!
06-17-2018 09:33 PM
A SAS macro is available for Pencina's net reclassification index: https://analytics.ncsu.edu/sesug/2010/SDA07.Kennedy.pdf
Pencina's method would be relevant for the scenario you describe, i.e. "not the whole model," just the added variables: "Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond" https://www.ncbi.nlm.nih.gov/pubmed/17569110.
I'm going to write a brief blog post about its implementation when I find time.
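For intuition, the category-free (continuous) NRI from that paper can be sketched in a few lines of Python: for events, count the net proportion whose predicted risk rises under the new model; for non-events, the net proportion whose risk falls; the NRI is the sum. All data below are made up for illustration:

```python
def continuous_nri(y, p_old, p_new):
    """Category-free NRI (Pencina et al.): net up-classification among
    events plus net down-classification among non-events."""
    # (pn > po) - (pn < po) is +1 if risk rose, -1 if it fell, 0 if tied.
    ev = [(pn > po) - (pn < po)
          for yv, po, pn in zip(y, p_old, p_new) if yv == 1]
    ne = [(pn > po) - (pn < po)
          for yv, po, pn in zip(y, p_old, p_new) if yv == 0]
    nri_events = sum(ev) / len(ev)        # net fraction of events moved up
    nri_nonevents = -sum(ne) / len(ne)    # net fraction of non-events moved down
    return nri_events + nri_nonevents

# Made-up risks from an old model vs. a model with the added marker.
y = [1, 1, 0, 0]
p_old = [0.60, 0.50, 0.40, 0.30]
p_new = [0.70, 0.60, 0.30, 0.35]
print(f"NRI = {continuous_nri(y, p_old, p_new):.2f}")
```

This is only a sketch of the continuous NRI; the categorical version in the linked macro instead compares movement across predefined risk strata.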