Sensitivity-based pruning is another way to 'interpret' a model, similar in spirit to a partial dependence plot. By replacing every observation's value of variable Xi with one (or several) selected value(s) and applying the score code to the modified data, we can see whether Xi contributes much to the target prediction.
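A minimal sketch of that replace-and-score idea, assuming a generic `score` function standing in for the fitted model's score code (the helper name `sensitivity` and the toy linear score are hypothetical, not from the original):

```python
import numpy as np

def sensitivity(score, X, i, values):
    """Mean absolute change in the model's score when column i is
    forced to each candidate value in `values`."""
    base = score(X)
    deltas = []
    for v in values:
        X_mod = X.copy()
        X_mod[:, i] = v  # overwrite every row's Xi with the selected value
        deltas.append(np.mean(np.abs(score(X_mod) - base)))
    return float(np.mean(deltas))

# Toy score code: a linear model where column 0 matters and column 1 does not.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
score = lambda X: 2.0 * X[:, 0] + 0.0 * X[:, 1] + 0.1 * X[:, 2]

s0 = sensitivity(score, X, 0, values=[X[:, 0].mean()])  # large: Xi drives the score
s1 = sensitivity(score, X, 1, values=[X[:, 1].mean()])  # zero: Xi is ignored
```

A variable whose replacement barely moves the scores is a candidate for pruning; a common choice for the selected value is the column mean, as above, or a grid of quantiles.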
If there is a redundant input Xii, the score code will pick up Xii as well, and Xi can still look important (overvalued) even though it is a duplicated variable that should have been filtered out in the variable-selection step.
If there is an interaction between Xi and Xii, changing Xi without changing Xii distorts the relationship present in the original data, which can make Xi look less important than it is (undervalued).
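The redundancy pitfall can be simulated. In this hypothetical sketch (all names are illustrative), x2 is a near-duplicate of x1 and the target depends only on x1, yet a ridge fit splits the weight between the two columns, so both show substantial sensitivity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)      # Xii: near-duplicate of Xi
y = 3.0 * x1 + rng.normal(scale=0.1, size=n)  # target uses only Xi
X = np.column_stack([x1, x2])

# Ridge regression: the near-collinear columns share the weight (~1.5 each).
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

def sens(X, i, w):
    """Sensitivity of column i: fix it at its mean and re-score."""
    X_mod = X.copy()
    X_mod[:, i] = X[:, i].mean()
    return float(np.mean(np.abs(X_mod @ w - X @ w)))

s1, s2 = sens(X, 0, w), sens(X, 1, w)  # both come out comparably large
```

Both columns register as important by this measure, even though dropping either one would lose essentially nothing, which is the overvaluation described above.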
I personally prefer to use this method on the partial-dependence-plot side, for interpreting models rather than selecting inputs. Nowadays the data is often too complicated, with too many variables and no clean distributional assumptions, to know about interactions or redundant inputs ahead of time.
Aurora Peddycord-Liu