12-20-2016 07:11 PM - edited 12-20-2016 07:12 PM
I am trying to build a somewhat unusual kind of model scoring/comparison. Right now I am testing different models manually, and I would like to automatically select the model that performs best according to a custom criterion.
Basically, I am building a dynamic pricing model, where 25% of users get each of four prices (A, B, C, or D). I then build one model per price: one model is trained on everyone who received price A, another on everyone who received price B, and so forth.
So I end up with 4 models.
I then score all 4 models on my validation or test data and see which one predicts the highest amount (the response variable is revenue). Whichever model predicts the highest amount determines the customer's final price segment. So if model A predicts the highest revenue for a customer, that customer is predicted to receive price A.
My model revenue is then the mean total revenue of all the customers who actually received the same price they were predicted to receive (in other words, all the customers whose actual price matches the model that predicted the highest revenue for them).
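To make the selection rule concrete, here is a minimal sketch of that metric outside Enterprise Miner (illustrative Python, not SAS syntax; the data, function name, and price labels are all hypothetical): for each customer, pick the price whose model predicts the highest revenue, then average the observed revenue over the customers whose actual price matches that pick.

```python
def model_revenue(predictions, actual_price, actual_revenue):
    """predictions: dict mapping price label -> list of predicted revenues
    (one entry per customer). actual_price / actual_revenue: per-customer
    lists of the price each customer received and the revenue observed."""
    matched = []
    for i, (price, revenue) in enumerate(zip(actual_price, actual_revenue)):
        # Price whose model predicts the highest revenue for this customer
        best = max(predictions, key=lambda p: predictions[p][i])
        if best == price:  # customer actually got the price we would assign
            matched.append(revenue)
    return sum(matched) / len(matched) if matched else float("nan")

# Hypothetical example: 4 customers, 4 price models A-D
preds = {
    "A": [10.0, 5.0, 2.0, 8.0],
    "B": [ 7.0, 9.0, 3.0, 1.0],
    "C": [ 6.0, 4.0, 9.0, 2.0],
    "D": [ 1.0, 2.0, 4.0, 9.0],
}
got = ["A", "B", "D", "C"]      # price each customer actually received
rev = [12.0, 8.0, 5.0, 3.0]     # observed revenue per customer

print(model_revenue(preds, got, rev))  # customers 0 and 1 match -> 10.0
```

Customers 2 and 3 received a price other than the one their argmax model picked, so only the first two contribute to the mean.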
I want to test different modeling techniques to see which one produces the highest model revenue. I can do it with SAS Code nodes manually, but I want to know if there is a way to do it automatically and have the best-performing model selected for me.
I have learned that the Start Groups/End Groups functionality won't work for my purposes, since I can't output scores from each group with that functionality.
Any ideas? Thanks
Here is an illustration of what I am doing:
Diagram of an example process (basically to show that I am scoring the models separately; each model is built from a different subset of the data).
12-21-2016 11:54 AM
From what I understand, the Ensemble node can take the average or maximum of predictions from different models built on the same training data, correct? My goal is a bit different, since I am trying to take the maximum across models built on different subsets of the training data.
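Since the Ensemble node doesn't fit, the comparison loop itself is simple to automate once each candidate technique has been fit on the four price subsets and scored on the same validation customers. A sketch (illustrative Python with hypothetical names and toy numbers, not an Enterprise Miner feature): compute the custom model revenue for each technique and keep the winner.

```python
def pick_best_technique(technique_preds, actual_price, actual_revenue):
    """technique_preds: dict mapping technique name -> dict mapping
    price label -> per-customer predicted revenues (all scored on the
    same validation customers). Returns (name, model_revenue) of the
    technique with the highest model revenue."""
    def model_revenue(preds):
        matched = [r for i, (p, r) in enumerate(zip(actual_price, actual_revenue))
                   # keep customers whose actual price is the argmax price
                   if max(preds, key=lambda k: preds[k][i]) == p]
        return sum(matched) / len(matched) if matched else float("-inf")
    return max(((name, model_revenue(preds))
                for name, preds in technique_preds.items()),
               key=lambda t: t[1])

# Hypothetical example: two techniques, two prices, two validation customers
techniques = {
    "tree":       {"A": [10.0, 1.0], "B": [2.0, 9.0]},
    "regression": {"A": [ 1.0, 9.0], "B": [8.0, 2.0]},
}
got = ["A", "B"]
rev = [5.0, 7.0]

print(pick_best_technique(techniques, got, rev))  # ('tree', 6.0)
```

Here the "tree" predictions assign both customers the price they actually received (mean revenue 6.0), while "regression" matches no one, so the tree wins under this metric.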