<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Classification: K nearest neighbors (MBR) in SAS Data Science</title>
    <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165096#M1825</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi, Miguel:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Do you know the difference between PROC DISCRIM and the MBR node in terms of KNN? I used both but got totally different results.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I don't know which one to use for scoring my new data now.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Mon, 08 Jun 2015 23:52:45 GMT</pubDate>
    <dc:creator>EricTsai</dc:creator>
    <dc:date>2015-06-08T23:52:45Z</dc:date>
    <item>
      <title>Classification: K nearest neighbors (MBR)</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165091#M1820</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi all, I'm a math student who must pass a data mining exam in a week. I cannot solve any part of this exercise using SAS Miner; can someone help me?&lt;/P&gt;&lt;P&gt;Create an MBR classifier in SAS Enterprise Miner on the Intrusion Detection dataset (a .csv file I downloaded from my university website), after having:&lt;BR /&gt;1. Performed an exploratory analysis of the data and presented the main results.&lt;BR /&gt;2. Partitioned the dataset into training, test, and validation sets (50%-30%-20%).&lt;BR /&gt;3. Eliminated the non-numerical features and explained the reasons for their exclusion.&lt;BR /&gt;4. Chosen a value of K (indicating why you chose it).&lt;BR /&gt;5. Calculated the total prediction error on the test set.&lt;BR /&gt;6. Repeated steps 3 and 4, changing the chosen value of K, for a total of 3 iterations.&lt;BR /&gt;7. Plotted the curve of total error as K varies. For which value of K do you obtain the smallest error?&lt;BR /&gt;8. For that value of K, shown the confusion matrix on the validation set.&lt;/P&gt;&lt;P&gt;I have problems mainly with step 3, but also with steps 4 (how can I justify my choice of K?), 5, 7 (how can I see the curve of total error?), and 8 (how can I find the confusion matrix?).&lt;/P&gt;&lt;P&gt;I apologize for my poor English, and I hope for your help.&lt;/P&gt;&lt;P&gt;Teodoro&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Jul 2014 07:57:53 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165091#M1820</guid>
      <dc:creator>teodoro_stefanello</dc:creator>
      <dc:date>2014-07-03T07:57:53Z</dc:date>
    </item>
    <item>
      <title>Re: Classification: K nearest neighbors (MBR)</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165092#M1821</link>
      <description>&lt;P&gt;Hi Teodoro,&lt;/P&gt;
&lt;P&gt;To find a suitable number of nearest neighbors, I would run several MBR nodes with different numbers of neighbors, and then use a Model Comparison node to compare their fit statistics and score distributions. This is just my preference; I'm not sure if there is a more theoretical way to do it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Two options to see the classification matrix:&lt;/P&gt;
&lt;P&gt;1. For any node in the Model tab, you can see the classification matrix in your results. Go to View-&amp;gt;Assessment-&amp;gt;Classification Chart. &lt;SPAN style="font-size: 10pt; line-height: 1.5em;"&gt;If you want to see the numbers, click on the fourth icon (table button).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;2. Alternatively, you can connect your MBR node to a Model Comparison node. You will see the classification matrix in the results of the Model Comparison node.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are several options to eliminate your non-numerical inputs. One of them is to click on the Variables ellipsis (...) in the properties of your MBR node. A menu will open. Specify "Rejected" as the role of all variables that are not your binary target or your interval inputs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can use the Filter node to filter outliers and observations that can throw off your model. It is up to you whether to use it in your MBR flow. There is more info about the Filter node in the reference help.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I hope it helps,&lt;/P&gt;
&lt;P&gt;Miguel&lt;/P&gt;</description>
      <pubDate>Fri, 07 Jul 2017 19:15:05 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165092#M1821</guid>
      <dc:creator>M_Maldonado</dc:creator>
      <dc:date>2017-07-07T19:15:05Z</dc:date>
    </item>
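[Editor's note: for readers following the thread outside SAS, the classification (confusion) matrix Miguel describes viewing can be computed from any pair of actual and predicted label vectors. A minimal Python sketch; the labels below are made up for illustration and are not Enterprise Miner output:]

```python
# Sketch only (Python, not SAS): building the 2x2 classification matrix that
# the Classification Chart / Model Comparison results display.
# The actual/predicted label vectors below are made up for illustration.
from collections import Counter

def confusion_matrix(actual, predicted, labels=("0", "1")):
    """Count (actual, predicted) pairs; rows = actual, columns = predicted."""
    counts = Counter(zip(actual, predicted))
    return {(a, p): counts.get((a, p), 0) for a in labels for p in labels}

actual    = ["1", "0", "1", "1", "0", "0", "1", "0"]
predicted = ["1", "0", "0", "1", "0", "1", "1", "0"]

cm = confusion_matrix(actual, predicted)
for a in ("0", "1"):
    print("actual", a, "->", [cm[(a, p)] for p in ("0", "1")])
```

The off-diagonal cells are the misclassifications; their sum divided by the number of observations is the total error rate the exercise asks for.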
    <item>
      <title>Re: Classification: K nearest neighbors (MBR)</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165093#M1822</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thank you very much, Miguel! Your suggestions are very helpful; now I can do at least half of the exercise.&lt;/P&gt;&lt;P&gt;Do you know how to eliminate non-numerical features from my dataset analysis? I thought of using a Filter node, but I don't know if it is useful, since I must eliminate all non-numerical features, not just some of their values. Without this I cannot do the rest of the exercise, only explain the procedure to follow.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Jul 2014 17:21:46 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165093#M1822</guid>
      <dc:creator>teodoro_stefanello</dc:creator>
      <dc:date>2014-07-03T17:21:46Z</dc:date>
    </item>
    <item>
      <title>Re: Classification: K nearest neighbors (MBR)</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165094#M1823</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I highly recommend taking the course Advanced Analytics Using SAS Enterprise Miner to build a solid foundation in most Enterprise Miner analytics tasks.&lt;/P&gt;&lt;P&gt;In the meantime you can read the Getting Started with SAS Enterprise Miner section in the reference help (Help-&amp;gt;Contents menu, or press the F1 key), and other sections as you need them.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;There are several options to eliminate your non-numerical inputs. One of them is to click on the Variables ellipsis (...) in the properties of your MBR node. A menu will open. Specify "Rejected" as the role of all variables that are not your binary target or your interval inputs.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You can use the Filter node to filter outliers and observations that can throw off your model. It is up to you whether to use it in your MBR flow. There is more info about the Filter node in the reference help.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Good luck!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Jul 2014 17:37:43 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165094#M1823</guid>
      <dc:creator>M_Maldonado</dc:creator>
      <dc:date>2014-07-03T17:37:43Z</dc:date>
    </item>
    <item>
      <title>Re: Classification: K nearest neighbors (MBR)</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165095#M1824</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;You can use the Metadata node to drop variables in the middle of an EM workflow. You are right that the Filter node is for 'cutting values' of a variable. The Metadata node, as the name suggests, is about managing data sets.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Jason Xin&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 08 Jul 2014 14:39:16 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165095#M1824</guid>
      <dc:creator>JasonXin</dc:creator>
      <dc:date>2014-07-08T14:39:16Z</dc:date>
    </item>
    <item>
      <title>Re: Classification: K nearest neighbors (MBR)</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165096#M1825</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi, Miguel:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Do you know the difference between PROC DISCRIM and the MBR node in terms of KNN? I used both but got totally different results.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I don't know which one to use for scoring my new data now.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 08 Jun 2015 23:52:45 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165096#M1825</guid>
      <dc:creator>EricTsai</dc:creator>
      <dc:date>2015-06-08T23:52:45Z</dc:date>
    </item>
    <item>
      <title>Re: Classification: K nearest neighbors (MBR)</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165097#M1826</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Miguel, thank you for your answer.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Is there a way to program a grid search for K instead of having to manually set different model nodes?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for your reply, best regards.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Ivan.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 10 Jun 2015 16:02:04 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165097#M1826</guid>
      <dc:creator>IvanGV</dc:creator>
      <dc:date>2015-06-10T16:02:04Z</dc:date>
    </item>
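[Editor's note: within this thread, comparing K values in Enterprise Miner is done manually with one MBR node per K; the programmatic loop Ivan asks about can be sketched outside SAS. A minimal Python illustration with made-up data standing in for the training/validation partition and the candidate K values:]

```python
# Sketch only (Python, not Enterprise Miner): a programmatic "grid search"
# over K for a plain KNN classifier -- the loop equivalent of running one
# MBR node per K and comparing them. All data points are made up.
import math
from collections import Counter

def knn_predict(train, query, k):
    """Majority vote among the k nearest labeled training points."""
    nearest = sorted(train, key=lambda pt: math.dist(pt[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# (features, label) pairs standing in for the partitioned data
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
valid = [((0.5, 0.5), "a"), ((5.5, 5.5), "b"), ((1, 1), "a"), ((6, 6), "b")]

errors = {}
for k in (1, 3, 5):  # the candidate K values to compare
    wrong = sum(knn_predict(train, x, k) != y for x, y in valid)
    errors[k] = wrong / len(valid)  # total misclassification rate

best_k = min(errors, key=errors.get)
print("error by K:", errors, "| best K:", best_k)
```

Plotting `errors` against K gives the curve of total error from step 7 of the exercise; the K with the smallest validation error is the one to report.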
    <item>
      <title>Re: Classification: K nearest neighbors (MBR)</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165098#M1827</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Discriminant analysis assumes you know the outcome to create your model; K nearest neighbour methods assume you don't.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;DA is a supervised learning algorithm, while KNN is an unsupervised learning algorithm.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Your new data gets scored with the original method you used.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also, please start your own discussion in the future.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 10 Jun 2015 16:23:05 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165098#M1827</guid>
      <dc:creator>Reeza</dc:creator>
      <dc:date>2015-06-10T16:23:05Z</dc:date>
    </item>
    <item>
      <title>Re: Classification: K nearest neighbors (MBR)</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165099#M1828</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi, Reeza:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I used PROC DISCRIM with METHOD=NPAR, which in turn gives me the KNN (k-nearest neighbors) algorithm.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;KNN is NOT an unsupervised learning algorithm; it is a supervised learning algorithm.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 10 Jun 2015 17:48:50 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165099#M1828</guid>
      <dc:creator>EricTsai</dc:creator>
      <dc:date>2015-06-10T17:48:50Z</dc:date>
    </item>
    <item>
      <title>Re: Classification: K nearest neighbors (MBR)</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165100#M1829</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I mixed up K-Means and KNN.&amp;nbsp; &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 10 Jun 2015 19:03:42 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Classification-K-nearest-neighbors-MBR/m-p/165100#M1829</guid>
      <dc:creator>Reeza</dc:creator>
      <dc:date>2015-06-10T19:03:42Z</dc:date>
    </item>
  </channel>
</rss>

