10-29-2013 10:10 AM
I am planning to rebuild a logistic regression model. In the past I have always built a response model on data with no duplicates: one row per customer.
The data I have now is as below: there are multiple call_date rows per customer. Can we really build a logistic regression model on data with duplicates, or should I only take the max call date? The model I have to rebuild was run on duplicated data like the data below. Your help will be much appreciated! Many thanks.
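For what it's worth, taking only the most recent call per customer can be sketched in a few lines. This is a pure-Python illustration; the `customer_id`, `call_date`, and `response` field names and the toy rows are my assumptions, not your actual columns:

```python
from datetime import date

# Toy call-level rows: several calls per customer (schema assumed for illustration).
rows = [
    {"customer_id": 1, "call_date": date(2013, 9, 1), "response": 0},
    {"customer_id": 1, "call_date": date(2013, 10, 5), "response": 1},
    {"customer_id": 2, "call_date": date(2013, 9, 20), "response": 0},
]

# Keep only the row with the latest call_date for each customer.
latest = {}
for row in rows:
    cid = row["customer_id"]
    if cid not in latest or row["call_date"] > latest[cid]["call_date"]:
        latest[cid] = row

# One row per customer, each carrying that customer's max call_date.
deduped = sorted(latest.values(), key=lambda r: r["customer_id"])
```

In SAS this would typically be a sort by customer and date followed by keeping the last record per BY group.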
10-29-2013 11:52 AM
In general, the independence assumption is violated by having the repeated measures in the data. Some suggestions:
First, rebuild the 'wrong' model (the one with all the repeats) to make sure you can reproduce it.
Do a sensitivity analysis. Sometimes, the "wrong" model is still useful.
-- do a model based on the first call.
-- do a model based on the last call.
-- compare them to the original model.
If they all tell the same story, then use the first or last (whichever makes more sense in the analysis context).
If they tell different stories, then you have got more work to do to understand what is going on.
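A minimal sketch of that sensitivity check, with a toy single-predictor logistic fit in pure Python (gradient descent stands in for whatever procedure you actually use, e.g. PROC LOGISTIC; all data and field names here are invented):

```python
import math
from datetime import date

def fit_logistic(xs, ys, lr=0.5, steps=3000):
    """Fit y ~ intercept + w*x by gradient descent (toy stand-in for a real proc)."""
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += p - y
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Toy repeated-measures data: two calls per customer (schema assumed).
rows = [
    {"cid": 1, "call_date": date(2013, 9, 1), "x": 1.0, "y": 0},
    {"cid": 1, "call_date": date(2013, 10, 1), "x": 1.2, "y": 0},
    {"cid": 2, "call_date": date(2013, 9, 2), "x": 3.0, "y": 1},
    {"cid": 2, "call_date": date(2013, 10, 2), "x": 3.5, "y": 1},
    {"cid": 3, "call_date": date(2013, 9, 3), "x": 0.8, "y": 0},
    {"cid": 3, "call_date": date(2013, 10, 3), "x": 1.1, "y": 0},
    {"cid": 4, "call_date": date(2013, 9, 4), "x": 2.8, "y": 1},
    {"cid": 4, "call_date": date(2013, 10, 4), "x": 3.2, "y": 1},
]

def one_per_customer(rows, newest):
    """Reduce to one row per customer: first call (newest=False) or last (True)."""
    by_cust = {}
    for r in rows:
        by_cust.setdefault(r["cid"], []).append(r)
    pick = max if newest else min
    return [pick(calls, key=lambda r: r["call_date"]) for calls in by_cust.values()]

first = one_per_customer(rows, newest=False)
last = one_per_customer(rows, newest=True)
w_first, _ = fit_logistic([r["x"] for r in first], [r["y"] for r in first])
w_last, _ = fit_logistic([r["x"] for r in last], [r["y"] for r in last])
# If both slopes tell the same story (here both positive), the choice of
# first vs last call matters less; if they disagree, dig deeper.
```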
Another possibility for the sensitivity analysis is to use some sort of summary record for each customer. It may be that things like number of calls are important markers.
Good luck with your modelling.
10-29-2013 12:14 PM
You can also consider collapsing the data to the customer level and then attaching customer-specific metrics, e.g. number of calls, number of sales, number of purchases.
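One way to sketch that collapse in plain Python (the `sale` and `amount` columns are made up for illustration; in SAS this would typically be a PROC SQL or DATA-step summary by customer):

```python
# Collapse call-level rows to one customer-level row with summary metrics
# (column names invented for illustration).
calls = [
    {"customer_id": 1, "sale": 1, "amount": 20.0},
    {"customer_id": 1, "sale": 0, "amount": 0.0},
    {"customer_id": 2, "sale": 1, "amount": 35.0},
]

summary = {}
for c in calls:
    s = summary.setdefault(c["customer_id"],
                           {"customer_id": c["customer_id"],
                            "n_calls": 0, "n_sales": 0, "total_amount": 0.0})
    s["n_calls"] += 1
    s["n_sales"] += c["sale"]
    s["total_amount"] += c["amount"]

# One row per customer; n_calls etc. become candidate model inputs.
customer_level = sorted(summary.values(), key=lambda r: r["customer_id"])
```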
I don't know if you could instead use a RANDOM statement in PROC GLIMMIX with a binary response to model the repeated measures directly. Random thought, though; it would need looking into.