<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Benchmark Study from SAS in SAS Data Science</title>
    <link>https://communities.sas.com/t5/SAS-Data-Science/Benchmark-Study-from-SAS/m-p/96121#M9234</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Perhaps knowing that standardized training and validation data sets stratified by the target were used across the model test suite means that all of the models for each package were evaluated using the same data.&lt;/P&gt;&lt;P&gt;And as cited, model quality was assessed through common model quality measures (Han &amp;amp; Kamber 2006), i.e.:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;cumulative lift in the first decile,&lt;/LI&gt;&lt;LI&gt;percentage of correctly classified events (often called event precision), and&lt;/LI&gt;&lt;LI&gt;overall percentage of correct classification.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Since the analysis used historical data, the event value for the target is known. You need historical data with known values to do predictive modeling. The predictions from each model were then compared on the known, common validation data to evaluate model quality using the statistics above.&lt;/P&gt;&lt;P&gt;As a side note, oftentimes predictive model performance improves when variables derived from text data are included. At last year's Analytics 2012 event, United Health Group indicated that their predictive models generally improved significantly when variables from text data were added to the algorithms, citing, for example, a misclassification rate that dropped from 30% to 10%.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Fri, 08 Feb 2013 14:50:54 GMT</pubDate>
    <dc:creator>FionaMcNeill</dc:creator>
    <dc:date>2013-02-08T14:50:54Z</dc:date>
    <item>
      <title>Benchmark Study from SAS</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Benchmark-Study-from-SAS/m-p/96120#M9233</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;The following products are analyzed in this benchmark study:&lt;/P&gt;&lt;P&gt;&lt;A href="http://support.sas.com/resources/papers/Benchmark_R_Mahout_SAS.pdf" title="http://support.sas.com/resources/papers/Benchmark_R_Mahout_SAS.pdf"&gt;http://support.sas.com/resources/papers/Benchmark_R_Mahout_SAS.pdf&lt;/A&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;SAS High-Performance Analytics Server 12.1 (using Hadoop); SAS Enterprise Miner 12.1 client&lt;/LI&gt;&lt;LI&gt;Rapid Predictive Modeler for SAS Enterprise Miner and SAS Enterprise Miner 12.1, SAS 9.3&lt;/LI&gt;&lt;LI&gt;R 2.15.1 “Roasted Marshmallows” version (64-bit)&lt;/LI&gt;&lt;LI&gt;Mahout 0.7&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;The SAS applications are commercial products; R 2.15.1 and Mahout 0.7 are open source. Three modeling methods were used across all of the applications: logistic regression, decision tree, and random forest.&lt;/P&gt;&lt;P&gt;One thing that was not clear from this study was how the results were derived. Although the section called "Model Quality" states that “Standardized training and validation data sets stratified by the target were used across the model test suite,” it is not clear what standards were applied. Were the standards based on how a human would classify the events, or on some other method? If so, what was the method?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 04 Feb 2013 01:41:21 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Benchmark-Study-from-SAS/m-p/96120#M9233</guid>
      <dc:creator>JuliaM</dc:creator>
      <dc:date>2013-02-04T01:41:21Z</dc:date>
    </item>
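    <!-- The post above asks what the study's "standardized training and validation data sets stratified by the target" might mean in practice. A minimal sketch of a stratified split, in Python rather than in any of the benchmarked products, with a made-up target field name "y": stratification preserves the target's class mix in both partitions. -->

```python
import random
from collections import defaultdict

def stratified_split(rows, target_key, train_frac=0.7, seed=42):
    """Split rows into train/validation partitions, preserving the
    target's class proportions in both (stratification by target)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for row in rows:
        by_class[row[target_key]].append(row)
    train, valid = [], []
    # Split each target class separately so both partitions keep
    # (roughly) the same event rate as the full data set.
    for members in by_class.values():
        rng.shuffle(members)
        cut = int(round(train_frac * len(members)))
        train.extend(members[:cut])
        valid.extend(members[cut:])
    return train, valid

# Toy data: 80 non-events (y=0) and 20 events (y=1), a 20% event rate
data = [{"y": 0}] * 80 + [{"y": 1}] * 20
train, valid = stratified_split(data, "y")
# Both partitions keep the 20% event rate: 14/70 in train, 6/30 in valid
```
    <!-- Because every package's models are then fit on the same train partition and scored on the same valid partition, the quality measures are comparable across packages. -->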
    <item>
      <title>Re: Benchmark Study from SAS</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Benchmark-Study-from-SAS/m-p/96121#M9234</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Perhaps knowing that standardized training and validation data sets stratified by the target were used across the model test suite means that all of the models for each package were evaluated using the same data.&lt;/P&gt;&lt;P&gt;And as cited, model quality was assessed through common model quality measures (Han &amp;amp; Kamber 2006), i.e.:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;cumulative lift in the first decile,&lt;/LI&gt;&lt;LI&gt;percentage of correctly classified events (often called event precision), and&lt;/LI&gt;&lt;LI&gt;overall percentage of correct classification.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Since the analysis used historical data, the event value for the target is known. You need historical data with known values to do predictive modeling. The predictions from each model were then compared on the known, common validation data to evaluate model quality using the statistics above.&lt;/P&gt;&lt;P&gt;As a side note, oftentimes predictive model performance improves when variables derived from text data are included. At last year's Analytics 2012 event, United Health Group indicated that their predictive models generally improved significantly when variables from text data were added to the algorithms, citing, for example, a misclassification rate that dropped from 30% to 10%.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 08 Feb 2013 14:50:54 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Benchmark-Study-from-SAS/m-p/96121#M9234</guid>
      <dc:creator>FionaMcNeill</dc:creator>
      <dc:date>2013-02-08T14:50:54Z</dc:date>
    </item>
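    <!-- The three quality measures cited in the reply (cumulative lift in the first decile, event precision, overall percentage correct) can be sketched as below. This is an illustrative Python version, not the study's actual code; the 0.5 classification cutoff is an assumption. -->

```python
def model_quality(scores, actuals):
    """Compute three common model quality measures (Han & Kamber 2006):
    cumulative lift in the first decile, event precision, and overall
    classification accuracy. `scores` are predicted event probabilities;
    `actuals` are the known 0/1 target values from validation data."""
    n = len(actuals)
    base_rate = sum(actuals) / n
    # Lift: event rate among the top-scoring 10% divided by the base rate
    ranked = sorted(zip(scores, actuals), key=lambda pair: -pair[0])
    decile = ranked[: max(1, n // 10)]
    lift = (sum(a for _, a in decile) / len(decile)) / base_rate
    # Classify with a 0.5 cutoff (an assumption, not from the study)
    preds = [1 if s >= 0.5 else 0 for s in scores]
    tp = sum(1 for p, a in zip(preds, actuals) if p == 1 and a == 1)
    event_precision = tp / max(1, sum(preds))
    accuracy = sum(1 for p, a in zip(preds, actuals) if p == a) / n
    return lift, event_precision, accuracy

# Example: 10 scored validation observations, 3 known events
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05]
actuals = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
lift, precision, accuracy = model_quality(scores, actuals)
# lift ≈ 3.33 (top decile is all events vs. a 30% base rate),
# precision = 0.75, accuracy = 0.9
```
    <!-- Since the same validation data with known target values is scored by every model, these statistics can be compared directly across the packages. -->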
  </channel>
</rss>