I'm performing a cluster analysis on a health insurance dataset (using proc distance and proc cluster) containing 4,343 observations with mixed continuous and binary variables.
I understand the importance of standardizing continuous variables. However, given the wide range of values in some of my continuous variables (notably outlier values for hospital visit counts and total medical expenses), I'm *still* seeing maximum z-scores of 15 or higher for the standardized continuous variables, compared with a maximum value of 1 for the unstandardized binary variables.
Should binary variables be standardized as well to prevent undue weight being placed on continuous variables?
For example, a rare binary event such as MED_STROKE=1 (only 7 cases out of 4,343) would receive a standardized value of 24.9, given its distance from the mean of MED_STROKE, which is close to zero.
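For reference, that 24.9 figure follows directly from z-scoring a 0/1 variable whose mean is the event rate. A quick DATA step check (a sketch; the dataset name ZCHECK is mine, the counts are from the post):

```sas
/* Verify the z-score a rare binary flag receives under STD standardization.
   Assumes 7 cases of MED_STROKE=1 among n=4,343 observations. */
data zcheck;
   n  = 4343;
   p  = 7 / n;                            /* mean of MED_STROKE (~0.0016) */
   s  = sqrt(p * (1 - p) * n / (n - 1));  /* sample standard deviation     */
   z1 = (1 - p) / s;                      /* z-score when MED_STROKE=1: ~24.9  */
   z0 = (0 - p) / s;                      /* z-score when MED_STROKE=0: ~-0.04 */
   put z1= z0=;
run;
```

So after standardization, a single 0-vs-1 disagreement on MED_STROKE would contribute a squared difference of roughly 25^2 to a Euclidean distance, which illustrates the weighting concern in the question.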
How much have you explored the options for the VAR statement in Proc Distance?
I'm aware there is a range of standardization options. I'm considering calculating a simple z-score (the STD=STD option on the PROC DISTANCE VAR statement) as a measure of the "distance" between the x=0 and x=1 observations in the binary variables.
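For comparison, one commonly suggested alternative to z-scoring everything is to let METHOD=GOWER handle the mixed levels: Gower's coefficient range-scales interval variables to [0,1] and scores binaries as match/mismatch, so no variable can dominate on scale alone. A sketch (dataset and variable names here are hypothetical, not from the thread):

```sas
/* Sketch: Gower distance for mixed continuous + binary variables.
   Interval variables are range-standardized by the Gower method;
   ANOMINAL marks asymmetric binaries, so 0/0 pairs are ignored --
   appropriate for rare flags like a stroke indicator. */
proc distance data=claims method=gower out=distmat;
   var interval(visit_count total_expense)
       anominal(med_stroke);
run;

/* The OUT= data set from PROC DISTANCE is TYPE=DISTANCE,
   so PROC CLUSTER can consume it directly. */
proc cluster data=distmat method=average outtree=tree noprint;
run;
```

Note that range standardization is itself sensitive to the outlier values mentioned above, so it may be worth capping or log-transforming the visit-count and expense variables first.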