05-19-2016 10:00 AM
Making machine learning more interpretable
Machine learning capabilities have been available for years (even decades), and they are now becoming much more mainstream. However, one nagging problem with applying machine learning algorithms in regulated industries is the difficulty of interpreting how machine learning models make their decisions. I believe this is a fundamental problem that won't be solved outright anytime soon, but from working with SAS customers all over the world I've gathered some tips on making machine learning more interpretable. Take a look: https://www.oreilly.com/ideas/predictive-modeling-striking-a-balance-between-accuracy-and-interpreta....
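By way of illustration, here is a minimal sketch of one widely used interpretability technique, a global surrogate model: fit a simple, readable model to the predictions of a complex one and inspect the simple model instead. This is a sketch assuming Python with scikit-learn; the dataset, model choices, and variable names below are illustrative, not taken from the article above.

# A minimal sketch of a global surrogate model (illustrative, not from
# the linked article). Assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in data and a "black box" model (hypothetical example).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Fit a shallow decision tree to the black box's predictions, then check
# how faithfully it reproduces them before trusting its explanation.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print("surrogate fidelity: {:.1%}".format(fidelity))

# Print the surrogate's rules as human-readable if/else splits.
print(export_text(surrogate, feature_names=["x%d" % i for i in range(5)]))

If the surrogate's fidelity is low, its explanation shouldn't be trusted; standard alternatives such as variable importance rankings or partial dependence plots can help in that case.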
Why does interpretability even matter? My colleague @andrew_pease123 answers that question here:
Want to know more about machine learning?
Check out this GitHub repo with technical best-practices resources, including quick-reference tables and a thorough best practices guide for applied machine learning: https://github.com/sassoftware/enlighten-apply/tree/master/ML_tables. To learn more about machine learning from a business perspective, see this SAS and O'Reilly co-sponsored report:
05-19-2016 10:22 AM
Thanks for the shout-out, Patrick! Also worth mentioning the O'Reilly WebEx we did on machine learning. The recording is available here.
Also related is the question of scalability in analytics and of creating automated feedback loops. I riff on that theme here: http://blogs.sas.com/content/sascom/2016/04/04/what-is-scale/