<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Sampling : Gradient Boosting Tree in SAS Data Science</title>
    <link>https://communities.sas.com/t5/SAS-Data-Science/Sampling-Gradient-Boosting-Tree/m-p/208140#M2827</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have a question about the gradient boosting tree algorithm. I understand that a simple tree is built on only a randomly selected &lt;STRONG&gt;subsample&lt;/STRONG&gt; of the full data set (&lt;STRONG&gt;random, without replacement&lt;/STRONG&gt;). Each consecutive tree is built on the prediction residuals (from all previous trees) of an independently drawn random sample. If an observation is incorrectly classified, it receives more weight in the next iteration so that it gets properly classified.&lt;/P&gt;&lt;P&gt;My question: how are thousands of trees built if each tree is built on entirely different observations (random, without replacement)? For example, suppose I have a data set of 20,000 records. If I take 0.5 as the fraction of training-set observations randomly selected to propose the next tree in the expansion, how would I be able to create more than 2 trees? My understanding: 50% of the data set is used to build the first tree and the remaining 50% to build the next tree, since samples are drawn without replacement. I apologize if this sounds lame; I am very confused about the tree-building process.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Wed, 26 Aug 2015 19:26:02 GMT</pubDate>
    <dc:creator>Ujjawal</dc:creator>
    <dc:date>2015-08-26T19:26:02Z</dc:date>
    <item>
      <title>Sampling : Gradient Boosting Tree</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Sampling-Gradient-Boosting-Tree/m-p/208140#M2827</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have a question about the gradient boosting tree algorithm. I understand that a simple tree is built on only a randomly selected &lt;STRONG&gt;subsample&lt;/STRONG&gt; of the full data set (&lt;STRONG&gt;random, without replacement&lt;/STRONG&gt;). Each consecutive tree is built on the prediction residuals (from all previous trees) of an independently drawn random sample. If an observation is incorrectly classified, it receives more weight in the next iteration so that it gets properly classified.&lt;/P&gt;&lt;P&gt;My question: how are thousands of trees built if each tree is built on entirely different observations (random, without replacement)? For example, suppose I have a data set of 20,000 records. If I take 0.5 as the fraction of training-set observations randomly selected to propose the next tree in the expansion, how would I be able to create more than 2 trees? My understanding: 50% of the data set is used to build the first tree and the remaining 50% to build the next tree, since samples are drawn without replacement. I apologize if this sounds lame; I am very confused about the tree-building process.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 26 Aug 2015 19:26:02 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Sampling-Gradient-Boosting-Tree/m-p/208140#M2827</guid>
      <dc:creator>Ujjawal</dc:creator>
      <dc:date>2015-08-26T19:26:02Z</dc:date>
    </item>
    <item>
      <title>Re: Sampling : Gradient Boosting Tree</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Sampling-Gradient-Boosting-Tree/m-p/208141#M2828</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hey Ujjawal,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To answer your specific question: at each step you use a random sample drawn without replacement, but that does not mean the number of available observations decreases. In your example of 20,000 observations and a 0.5 training fraction, at each step you re-weight all 20,000 observations but use only 10,000 of them to train that step's tree. This number stays constant throughout the boosting exercise. You sample 10,000 observations without replacement every time, yet you still have all 20,000 to sample from at each step.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Further discussion of the training sample&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;You have the right idea. At each step of boosting, a new tree is trained on a &lt;STRONG&gt;different&lt;/STRONG&gt; sample of re-weighted observations (whose weights are based on residuals). Jerome Friedman tried different training-sample values, including no sampling at all. He found experimentally that "stochastic" boosting was more accurate than plain boosting. Instead of using the entire re-weighted data set to train each tree (weak classifier), he tried several training fractions and found that both small and large data sets showed an improvement in error for training fractions in the range [0.4, 0.8].&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Take a look at J. Friedman's paper &lt;A href="https://statweb.stanford.edu/~jhf/ftp/stobst.pdf"&gt;Stochastic Gradient Boosting&lt;/A&gt;. He goes into more detail about stochastic gradient boosting for regression; look at figures 1 and 2 and their discussion.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also try gradient boosting models in SAS Enterprise Miner with different training-sample values. You might find a case where a training fraction of 1.0 (no subsampling) gives you a better model, but those cases are rare! The default training fraction of 0.6 usually works well with the default maximum depth of 2.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If I had to rewrite this paper, &lt;A href="http://support.sas.com/resources/papers/proceedings14/SAS133-2014.pdf" title="http://support.sas.com/resources/papers/proceedings14/SAS133-2014.pdf"&gt;Leveraging Ensemble Models in SAS® Enterprise Miner™&lt;/A&gt;, I would definitely beef up the discussion of boosting versus stochastic gradient boosting.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I hope this helps!&lt;/P&gt;&lt;P&gt;-Miguel&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 27 Aug 2015 19:10:29 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Sampling-Gradient-Boosting-Tree/m-p/208141#M2828</guid>
      <dc:creator>M_Maldonado</dc:creator>
      <dc:date>2015-08-27T19:10:29Z</dc:date>
    </item>
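The key point of the reply above can be checked with a toy simulation: sampling without replacement happens *within* each iteration, from the full data set, so the pool never shrinks between trees. The sketch below is plain Python, not SAS Enterprise Miner code, and every name and count in it is illustrative only (chosen to match the 20,000-row, 0.5-fraction example in the question).

```python
# Minimal sketch of the sampling scheme described in the reply:
# every boosting iteration draws a fresh 50% subsample WITHOUT
# replacement from the FULL 20,000-row training set, so thousands
# of trees can be proposed without ever exhausting the data.
import random

N_OBS = 20_000        # full training set size from the question
TRAIN_FRACTION = 0.5  # fraction sampled (without replacement) per tree
N_TREES = 200         # stand-in for "thousands" of boosting iterations

rng = random.Random(0)
pool = range(N_OBS)   # the pool is never reduced between iterations

sample_sizes = set()
first_two_overlap = None
previous = None
for _ in range(N_TREES):
    # Each draw takes 10,000 DISTINCT rows, but always from all 20,000.
    sample = set(rng.sample(pool, int(TRAIN_FRACTION * N_OBS)))
    sample_sizes.add(len(sample))
    if previous is not None and first_two_overlap is None:
        first_two_overlap = len(sample & previous)
    previous = sample

print(sample_sizes)       # every tree trains on exactly 10,000 rows
print(first_two_overlap)  # trees 1 and 2 share roughly half their rows
```

Note that consecutive samples overlap by about 5,000 rows on average (a hypergeometric draw, not the disjoint 50/50 split the question assumed); that overlap is exactly why tree 3, 4, ... always have data to sample from.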
  </channel>
</rss>

