I would like to transform a categorical predictor variable into a continuous predictor variable, that is, to map character class values onto real-valued representations of those values. I know that I can do this in several ways: for example, by substituting the frequency of a level for the level value itself, or by computing the entropy of a level.
I want to generalize the interpretation of the Information Value of a variable from the binary "good/bad" classification frequently used in credit scoring to a multiclass one-versus-rest representation of the 1-of-N (GLM) encoding. For example, if there are 3 class values, I would compute the Information Value of each class in turn versus the other two, so that for class labels 'A', 'B', 'C' the three Information Values would be 'A' vs ('B', 'C'), 'B' vs ('A', 'C'), and 'C' vs ('A', 'B'). This would let me represent a multiclass categorical variable numerically as a single real-valued variable. I know that this technique produces only N distinct values, but it would let me use existing code that works well on continuous variables, and I do not know how to incorporate a GLM-encoded categorical variable into my work.
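To make the one-versus-rest calculation concrete, here is a minimal sketch in SAS, assuming a hypothetical data set work.train with a categorical predictor cat_var and a character target variable target; it computes the Information Value for level 'A' versus the rest (levels with zero events or non-events would need smoothing before taking the log):

/* Event / non-event counts per level of cat_var, for target level 'A' vs. the rest */
proc sql noprint;
   create table counts_a as
   select cat_var,
          sum(target =  'A') as n_event,
          sum(target ne 'A') as n_nonevent
   from work.train
   group by cat_var;

   /* column totals, to convert counts into distributions */
   select sum(n_event), sum(n_nonevent)
      into :tot_event, :tot_nonevent
   from counts_a;
quit;

/* Weight of Evidence per level and the running total Information Value */
data woe_a;
   set counts_a end=last;
   woe = log( (n_event/&tot_event) / (n_nonevent/&tot_nonevent) );
   iv + (n_event/&tot_event - n_nonevent/&tot_nonevent) * woe;  /* sum statement retains iv */
   if last then put 'Information Value, A vs. rest: ' iv;
run;

The same two steps would be repeated (or wrapped in a macro) for 'B' vs. rest and 'C' vs. rest.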
Is there a better way than Information Value to transform a categorical variable into a continuous variable?
How does Enterprise Miner process categorical variables? Does EM convert a categorical variable into a real-valued variable and then use the real values in splitting a target variable?
One option is to use the regression node (or something fancier, if you prefer), and predict the target variable just using your single class variable. Then, add a transform variables node that creates a new variable equal to p_target (the result of the model). I also like to drop the various other variables created by the regression. You can also do this outside EM, and paste the result into a transform variables node.
For examples, you can see my SAS Global Forum paper and/or brainshark presentation:
http://support.sas.com/resources/papers/proceedings12/126-2012.pdf
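Outside of EM, the same idea can be sketched roughly as follows (hypothetical data set and variable names; a binary target is assumed here so that PROC LOGISTIC applies):

/* Predict the target from the single class variable and keep the
   predicted probability as the continuous stand-in for cat_var */
proc logistic data=work.train;
   class cat_var / param=glm;
   model target(event='1') = cat_var;
   output out=work.scored p=p_target;
run;

The new p_target column takes only as many distinct values as cat_var has levels, which matches the behavior described in the question.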
This is an informed answer. Thank you, Mr. Levine.
Upon reflection, I could also expand the categorical variable into its levels using GLM encoding, creating a binary indicator vector for each observation in which the indicator for the observation's class level is set to 1 and all other indicators are set to 0. Then I could run a principal components analysis on the indicator columns and take the first principal component score, which represents the projection of the variable along the axis of maximum variance and hence explanatory power.
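A rough sketch of that idea (again with hypothetical names), using PROC GLMMOD to build the GLM-encoded indicator columns and PROC PRINCOMP to extract the first component:

/* GLMMOD needs a numeric response, so add a placeholder */
data work.train2;
   set work.train;
   _dummy = 1;
run;

/* Expand cat_var into 0/1 indicator columns (GLM design matrix);
   output columns are named COL1, COL2, ...; COL1 is the intercept */
proc glmmod data=work.train2 outdesign=work.design;
   class cat_var;
   model _dummy = cat_var;
run;

/* First principal component of the indicator columns
   (col2-col4 assumes a three-level cat_var) */
proc princomp data=work.design out=work.pca n=1;
   var col2-col4;
run;

The Prin1 variable in work.pca would then be the single continuous representation of cat_var.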
Whatever the technique, however, I would have to create a framework (did someone say "Write a SAS macro"?) to apply it to every categorical variable to be encoded. But that would not be a significant task to perform.
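The framework itself could be as simple as a macro loop over the list of categorical variables (a skeleton only; the data set and variable names are hypothetical):

%macro encode_all(dsn=, varlist=);
   %local i var;
   %do i = 1 %to %sysfunc(countw(&varlist));
      %let var = %scan(&varlist, &i);
      /* apply the chosen encoding (IV, first principal component,
         or CORRESP score) to &var in &dsn here */
   %end;
%mend encode_all;

%encode_all(dsn=work.train, varlist=cat_var1 cat_var2 cat_var3);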
A related question is: If I use the target (dependent) variable information in constructing the encoded representation of the categorical variable, am I introducing bias into the solution? Bias would distort the modeling results, and could come from dependencies in the data introduced by sampling, for example. Perhaps using target information is not a recommended practice. What do we think about this in general?
Hi Tenno.
You are right to think carefully about those issues. There is a good chance that using a preliminary regression parameter in a subsequent regression will introduce severe bias into your final inferences. I have done what you are thinking of along the lines of principal components quite successfully, but I would strongly recommend using PROC CORRESP to produce each score.
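A minimal PROC CORRESP sketch of that suggestion, assuming the same hypothetical cat_var and target as above:

/* Simple correspondence analysis of cat_var against the target;
   one dimension is enough to get a single numeric score per level */
proc corresp data=work.train outc=work.scores dimens=1;
   tables cat_var, target;
run;

The row points in work.scores (the levels of cat_var) carry a Dim1 coordinate that can be merged back onto the original data as the continuous score, much like the first principal component approach but derived from the cross-tabulation with the target.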
Thanks, Damien. It is very important not to introduce new errors, which may confound the results, into the problem one is trying to solve.