At and above a specific cutoff value, sensitivity is the percentage of those with the outcome of interest that are detected by your logistic model: the percentage of customers who are more likely to respond. Specificity is the percentage of those without the outcome of interest that are detected by the model: the percentage of customers who are not likely to respond. The percentage of false positives is the percentage of customers your model predicts as more likely to respond who in fact do not respond. The percentage of false negatives is the percentage of customers your model predicts as not likely to respond who do in fact respond.

In your example, at a cutoff of 0.20 or more, your model picks up only 16.9% [= sensitivity] of the customers who are more likely to respond, and 3.7% [= 100% - 96.3% (specificity)] of the customers who are not likely to respond. However, 75.3% [= % of false positives] of those your model predicts as likely to respond will in fact not respond, though 94.2% [= 100% - 5.8% (% of false negatives)] of those your model predicts as not likely to respond will in fact not respond.

You can visualize this better in a two-by-two table like the following:

Test prediction   Likely to respond   Not likely to respond     Total
0.20 or more                  9,516                  29,047    38,563
< 0.20                       46,745                 753,000   799,745
Total                        56,261                 782,047   838,308

Sensitivity     =   9,516 /  56,261 = 16.9%
Specificity     = 753,000 / 782,047 = 96.3%
False positives =  29,047 /  38,563 = 75.3%
False negatives =  46,745 / 799,745 =  5.8%

To pick an appropriate test prediction cutoff, you have to balance the costs against the benefits. Using the % of false positives as one criterion: of every four customers you tried to contact, on average only one would be likely to respond. If contacting customers is relatively cheap, you might not worry so much about this false positive %, and instead prefer to detect a larger share of those customers who would in fact respond (that is, to increase the sensitivity).
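The four percentages above can all be computed directly from the counts in the two-by-two table. A minimal sketch, using the exact counts from the table:

```python
# Confusion-matrix metrics for the 0.20 cutoff, using the counts from the table above.
tp = 9_516    # predicted likely to respond, actually responds
fp = 29_047   # predicted likely to respond, does not respond
fn = 46_745   # predicted not likely to respond, actually responds
tn = 753_000  # predicted not likely to respond, does not respond

sensitivity = tp / (tp + fn)         # share of actual responders detected
specificity = tn / (tn + fp)         # share of actual non-responders detected
false_positive_pct = fp / (tp + fp)  # share of "likely to respond" predictions that are wrong
false_negative_pct = fn / (fn + tn)  # share of "not likely to respond" predictions that are wrong

print(f"Sensitivity:     {sensitivity:.1%}")         # 16.9%
print(f"Specificity:     {specificity:.1%}")         # 96.3%
print(f"False positives: {false_positive_pct:.1%}")  # 75.3%
print(f"False negatives: {false_negative_pct:.1%}")  # 5.8%
```

Note that sensitivity/specificity divide down the columns of the table (by actual outcome), while the false positive/negative percentages divide across the rows (by predicted outcome).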
At a test prediction cutoff of 0.05 or above, only one of every nine customers you tried to contact would be likely to respond [= 100% - 89% false positives], but you would in fact detect more than three-quarters of the customers who would likely respond [sensitivity = 77.9%].
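This trade-off can be explored by recomputing the metrics at each candidate cutoff. A toy sketch of the idea, using made-up predicted probabilities and outcomes (not your data) purely to illustrate that lowering the cutoff raises sensitivity at the cost of more false positives:

```python
# Hypothetical illustration data: predicted probabilities and observed outcomes.
probs     = [0.03, 0.07, 0.12, 0.18, 0.25, 0.40]
responded = [False, False, True, False, True, True]

results = {}
for cutoff in (0.05, 0.20):
    # Count true positives, false positives, and false negatives at this cutoff.
    tp = sum(p >= cutoff and r for p, r in zip(probs, responded))
    fp = sum(p >= cutoff and not r for p, r in zip(probs, responded))
    fn = sum(p < cutoff and r for p, r in zip(probs, responded))
    sensitivity = tp / (tp + fn)
    false_positive_pct = fp / (tp + fp)
    results[cutoff] = (sensitivity, false_positive_pct)
    print(f"cutoff {cutoff:.2f}: sensitivity {sensitivity:.1%}, "
          f"false positives {false_positive_pct:.1%}")
```

In this toy data, dropping the cutoff from 0.20 to 0.05 lifts sensitivity from 2/3 to 3/3 while the false positive percentage rises from 0% to 40%, which is the same cost-benefit balance described above.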