<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Neural Network in EM 6.2 performing materially worse than other approaches in SAS Data Science</title>
    <link>https://communities.sas.com/t5/SAS-Data-Science/Neural-Network-in-EM-6-2-performing-materially-worse-than-other/m-p/369868#M5511</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a dataset for prediction (binary target)&amp;nbsp;with 80K observations and 100 input variables. Methods like Gradient Boosting fit the data quite well, with a validation Gini of over 70%.&amp;nbsp; When I fit a Neural Network with all 100 variables, I&amp;nbsp;get a Gini of around 15% (both training and validation). When I do variable selection and use 25-odd variables in the NN, the validation Gini increases to 30% - which is still&amp;nbsp;materially worse than the other models. I tried the default NN in EM 6.2 with the following changes:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. Architecture : MLP&lt;/P&gt;&lt;P&gt;2. #Hidden Units : 2, 3, 5, 10, 20&lt;/P&gt;&lt;P&gt;3. Decay : 0, 0.05, 0.1, 0.5, 1, 5, 10, 25, 50. Decay seems to have hardly any impact on model performance.&lt;/P&gt;&lt;P&gt;4. Standardization : Standard Deviation and Range&lt;/P&gt;&lt;P&gt;5. &lt;STRONG&gt;Sufficient #iterations to ensure model convergence&lt;/STRONG&gt;. No other changes to optimization properties.&lt;/P&gt;&lt;P&gt;(I have also tried some other properties such as RBU, activation function, combination functions, direct connections,&amp;nbsp;etc., without any material change in the model.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Clearly the model is converging to a local minimum; the 25-variable model to a slightly better minimum.&amp;nbsp;Am I missing some basic setting/feature that is leading to such poor NN models?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Nil&lt;/P&gt;</description>
    <pubDate>Fri, 23 Jun 2017 13:32:56 GMT</pubDate>
    <dc:creator>mnil</dc:creator>
    <dc:date>2017-06-23T13:32:56Z</dc:date>
    <item>
      <title>Neural Network in EM 6.2 performing materially worse than other approaches</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Neural-Network-in-EM-6-2-performing-materially-worse-than-other/m-p/369868#M5511</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a dataset for prediction (binary target)&amp;nbsp;with 80K observations and 100 input variables. Methods like Gradient Boosting fit the data quite well, with a validation Gini of over 70%.&amp;nbsp; When I fit a Neural Network with all 100 variables, I&amp;nbsp;get a Gini of around 15% (both training and validation). When I do variable selection and use 25-odd variables in the NN, the validation Gini increases to 30% - which is still&amp;nbsp;materially worse than the other models. I tried the default NN in EM 6.2 with the following changes:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. Architecture : MLP&lt;/P&gt;&lt;P&gt;2. #Hidden Units : 2, 3, 5, 10, 20&lt;/P&gt;&lt;P&gt;3. Decay : 0, 0.05, 0.1, 0.5, 1, 5, 10, 25, 50. Decay seems to have hardly any impact on model performance.&lt;/P&gt;&lt;P&gt;4. Standardization : Standard Deviation and Range&lt;/P&gt;&lt;P&gt;5. &lt;STRONG&gt;Sufficient #iterations to ensure model convergence&lt;/STRONG&gt;. No other changes to optimization properties.&lt;/P&gt;&lt;P&gt;(I have also tried some other properties such as RBU, activation function, combination functions, direct connections,&amp;nbsp;etc., without any material change in the model.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Clearly the model is converging to a local minimum; the 25-variable model to a slightly better minimum.&amp;nbsp;Am I missing some basic setting/feature that is leading to such poor NN models?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Nil&lt;/P&gt;</description>
      <pubDate>Fri, 23 Jun 2017 13:32:56 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Neural-Network-in-EM-6-2-performing-materially-worse-than-other/m-p/369868#M5511</guid>
      <dc:creator>mnil</dc:creator>
      <dc:date>2017-06-23T13:32:56Z</dc:date>
    </item>
    <item>
      <title>Re: Neural Network in EM 6.2 performing materially worse than other approaches</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Neural-Network-in-EM-6-2-performing-materially-worse-than-other/m-p/370532#M5524</link>
      <description>&lt;P&gt;I just realized through trial and error that the Neural Network node in SAS EM 6.2 cannot handle missing values.&amp;nbsp;Imputing the missing values in my data resolved the issue.&amp;nbsp;I would be grateful if someone could point me to SAS documentation on how the NN node in SAS EM processes missing data.&lt;/P&gt;</description>
      <pubDate>Mon, 26 Jun 2017 13:21:11 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Neural-Network-in-EM-6-2-performing-materially-worse-than-other/m-p/370532#M5524</guid>
      <dc:creator>mnil</dc:creator>
      <dc:date>2017-06-26T13:21:11Z</dc:date>
    </item>
  </channel>
</rss>