I have a simple gradient boosting model (maximum branch = 2, maximum depth = 1 {AdaBoost}) in SAS Enterprise Miner (v14.1) with a binary target and mostly interval inputs (~500 variables). I will be selecting variables whose variable importance is > 0.05 in both the training and validation datasets. However, I am trying to understand the mathematics behind how the "variable importance" is calculated. I read the documentation (decision tree variable importance), but it's very vague. I was wondering if anyone could shed light on how it is calculated, with a simple example? It would be very helpful.
Feature importance for a single decision tree: the amount by which each attribute's split point improves the performance measure, weighted by the number of observations the node is responsible for. The performance measure may be the purity (Gini index) used to select the split points, or another more specific error function.
Overall feature importance: the per-tree importances averaged across all of the decision trees in the model.
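To make that concrete, here is a minimal sketch of the impurity-based idea described above, applied to a single depth-1 tree (a stump, as in your AdaBoost setup). Note this is an illustration of the general Gini-decrease formula, not SAS Enterprise Miner's exact implementation; the function names and the example counts are made up for the demonstration.

```python
def gini(pos, n):
    """Gini impurity of a node with `pos` positives out of `n` observations."""
    if n == 0:
        return 0.0
    p = pos / n
    return 2 * p * (1 - p)

def stump_importance(n_total, node, left, right):
    """Weighted impurity decrease for a single split.

    `node`, `left`, and `right` are (n_positives, n_observations) tuples.
    The decrease in Gini impurity at the split is weighted by the share
    of all observations the node is responsible for.
    """
    n_pos, n = node
    l_pos, l_n = left
    r_pos, r_n = right
    decrease = (gini(n_pos, n)
                - (l_n / n) * gini(l_pos, l_n)
                - (r_n / n) * gini(r_pos, r_n))
    return (n / n_total) * decrease

# Hypothetical stump: root has 100 obs (50 positive), and the split sends
# 50 obs left (40 positive) and 50 obs right (10 positive).
imp = stump_importance(100, (50, 100), (40, 50), (10, 50))
print(round(imp, 3))  # root Gini 0.5, children each 0.32 -> importance 0.18
```

For a boosted model, you would compute this quantity for every split of every tree, sum it per input variable, and average (or normalize) across trees; a variable that is never chosen for a split gets importance 0.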
I am attaching a screenshot from the SAS Enterprise Miner 14.3 Reference documentation, where you can find the official computation description.