Programming the statistical procedures from SAS

Solved
Contributor
Posts: 24

Dear All

If an outlier is identified in the data and is not due to an error, what is the common way to deal with those outliers? Is PROC ROBUSTREG always a better choice than simply removing the outliers?

Thank you very much for your input.

Accepted Solutions
Solution
05-16-2012 04:05 PM
Posts: 5,056

Robust regression is just one way to downplay the influence of observations with extreme residuals. You can do it on your own by giving less weight to outlying observations (use the WEIGHT statement) or by removing the observation (equivalent to a weight of zero). If these observations are important in the phenomenon that you are modelling, then it might be better to investigate further. This might lead you to transform your data (e.g. a log transformation) or to add extra terms to your model to account for their possible occurrence.
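As a rough sketch of both options mentioned above, assuming a hypothetical data set TRIAL with response Y, predictor X, and an observation identifier OBSNUM (all names are illustrative, not from the original thread):

```sas
/* Option 1: robust regression. M estimation (the default method)
   automatically downweights observations with large residuals
   instead of deleting them. */
proc robustreg data=trial method=m;
   model y = x / diagnostics;   /* DIAGNOSTICS flags outliers */
run;

/* Option 2: the same idea by hand, with the WEIGHT statement in
   ordinary least squares. A weight of 0 is equivalent to dropping
   the observation entirely. Observation 27 is an arbitrary example. */
data trial_w;
   set trial;
   w = 1;
   if obsnum = 27 then w = 0.2;
run;

proc reg data=trial_w;
   weight w;
   model y = x;
run;
```

The choice of 0.2 as a weight is arbitrary here; the point is only that weights between 0 and 1 form a continuum between keeping and deleting an observation.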

Your model is a simplification of reality. It is up to you to decide how much outliers should be accounted for.

PG

All Replies
Contributor
Posts: 24

Thank you very much for your help.

Posts: 2,655

I want to echo PG's thinking, and say: always be thinking about outliers. Provided that they are not blunders in recording the data, they are the unusual cases, the ones that your model does NOT explain. It has always bothered me (except in analytical chemistry settings) that the tendency among many data analysts is to throw out the observations that make the preconceived model misbehave, rather than follow up on those points and see if there is anything remarkable about them. To me it would be like a coin collector discarding any coin minted before, say, 400 CE, simply because 99.99% of all coins have been minted since that date. If outlier data are found in a well-controlled and well-designed study, there is SOMETHING VERY IMPORTANT happening.

Steve Denham
