Interval Targets from Gradient Boosting


04-21-2017 04:46 PM

Hi,

I was wondering if there is a way to force the gradient boosting node to always produce positive predictions for an interval target. The target is never negative in the training data, and it could never realistically be negative.

I understand why the model predicts negative values, and I have tried a log transformation of the target followed by exponentiating the predictions. That did not work; the resulting model was poor. I can elaborate if needed.

Thank you,

James


04-26-2017 11:05 AM

In general, boosting produces predictions that are out of range because the model predicts those observations poorly (was that too obvious?). Better predictions sometimes result from increasing the number of trees while decreasing the learning rate (the SHRINKAGE= parameter). Other than that, no, there is no option within the boosting algorithm. The predictions would have to be post-processed, perhaps simply by truncating negative predictions to 0.
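A minimal sketch of the post-processing described above, assuming the scored predictions have been exported to Python (the values and variable names here are hypothetical):

```python
# Truncate negative interval-target predictions to 0 after scoring.
# `predictions` stands in for the scored output of the boosting model.
predictions = [4.2, -0.7, 0.0, 12.5, -3.1]

# Replace anything below zero with zero; positive values pass through unchanged.
truncated = [max(p, 0.0) for p in predictions]
```

This keeps the model itself untouched and only constrains the final scores, which is why it is a post-processing step rather than an option inside the boosting algorithm.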

-Padraic


04-26-2017 04:42 PM - edited 04-26-2017 04:42 PM

I don't believe the model is predicting those observations poorly; their actual values are usually 0 or very close to it.

The negative predictions seem to be inherent to the gradient boosting node's underlying algorithm. Since the boosting node constructs an additive regression model by sequentially fitting a base learner to the current pseudo-residuals at each iteration, the final prediction is an unconstrained sum of the tree outputs. Nothing bounds that sum below by zero, which explains why negatives occur despite there being no negative actual values.
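The mechanics can be shown with a toy numeric sketch (all numbers are made up): two observations whose target is 0 but whose current-stage predictions differ land in the same leaf of the next tree, and the leaf's averaged residual pushes one of the predictions below zero.

```python
# Two observations with nonnegative target y = 0 that share a leaf
# in the next boosting stage's tree.
y = [0.0, 0.0]
F = [1.0, 3.0]  # hypothetical current-stage predictions for the two observations

# Pseudo-residuals under squared-error loss: y - F.
residuals = [yi - fi for yi, fi in zip(y, F)]   # [-1.0, -3.0]

# A regression tree assigns one value per leaf: the mean residual of the leaf.
leaf_value = sum(residuals) / len(residuals)    # -2.0

# Updated predictions: the first observation is pushed below zero
# even though every target value is nonnegative.
F_next = [fi + leaf_value for fi in F]          # [-1.0, 1.0]
```

For clarity the sketch omits the shrinkage multiplier; with shrinkage the overshoot is smaller per stage but can still accumulate to a negative prediction across stages.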

I believe that performing a log transformation prior to modeling (as I tried) does not solve this, since the final model is still an additive combination of trees fit to the pseudo-residuals. I was looking for further explanation on this, and whether another transformation might force positive predictions (I suspect not).
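For reference, the transform-and-back-transform workflow being discussed can be sketched as follows, with a trivial mean predictor standing in for the boosted ensemble; the small offset `eps` is an assumption added here because log(0) is undefined for a target that contains zeros.

```python
import math

# Hypothetical nonnegative training targets, including zeros.
y_train = [0.0, 1.0, 4.0, 9.0]
eps = 1e-3  # offset so log(0 + eps) is defined

# Model the target in log space.
log_y = [math.log(v + eps) for v in y_train]

# Placeholder "model": the mean in log space (stands in for the boosted trees).
log_pred = sum(log_y) / len(log_y)

# Back-transform: exp() is strictly positive, so the prediction is
# bounded below by -eps, i.e. effectively nonnegative.
pred = math.exp(log_pred) - eps
```

The back-transform guarantees the bound, but as noted above it does not guarantee a good fit: the trees are optimizing error in log space, which can distort the model badly near zero.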

As a last resort, I could truncate the negatives to 0, but that seems like it may be the only option.

Thank you.