<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Merging or updating a dataset efficiently in SAS Programming</title>
    <link>https://communities.sas.com/t5/SAS-Programming/Merging-or-updating-a-datset-efficiently/m-p/64202#M13982</link>
    <description>that doesn't sound too big for an SQL join in memory&lt;BR /&gt;
2011-01-11.12:32:00:655 55000.36 09XYZABCD0885 tom&lt;BR /&gt;
= 8 + 8 + 13 + 10&lt;BR /&gt;
≈ 40 bytes per row&lt;BR /&gt;
40 * 2M rows (dataset1)&lt;BR /&gt;
+ 30 * 5M rows (dataset2)&lt;BR /&gt;
&lt;BR /&gt;
around 230MB of memory to hold both datasets&lt;BR /&gt;
even modest desktops might offer that.&lt;BR /&gt;
&lt;BR /&gt;
The join can be exact on reference, but amount must be rounded before comparing, and the timestamp can be converted to text or "moved to" the minute with intnx().&lt;BR /&gt;
The problem with amount is that it is stored with decimal fractions, and comparison of these can fail even when the values appear (rounded to two decimal places) to be the same. Other ways to make an exact comparison of AMT are to convert it to an integer after multiplying by 100, or to convert it to text with the format z19.2.</description>
    <pubDate>Thu, 20 Jan 2011 14:50:00 GMT</pubDate>
    <dc:creator>Peter_C</dc:creator>
    <dc:date>2011-01-20T14:50:00Z</dc:date>
    <item>
      <title>Merging or updating a dataset efficiently</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Merging-or-updating-a-datset-efficiently/m-p/64201#M13981</link>
      <description>I have 2 datasets &lt;BR /&gt;
&lt;BR /&gt;
Dataset1&lt;BR /&gt;
----&lt;BR /&gt;
Datestamp                     Amt             reference         userid &lt;BR /&gt;
2011-01-11.12:32:00:655 55000.36 09XYZABCD0885  tom&lt;BR /&gt;
2011-01-11.12:32:44:200 55000.36 09XYZABCD0885  joe&lt;BR /&gt;
2011-01-11.12:32:20:320 55000.36 09XYZABCD0885  dav&lt;BR /&gt;
2011-01-12.10:28:33:228    200.00 09XYZABCD0886  ken&lt;BR /&gt;
&lt;BR /&gt;
&lt;BR /&gt;
Dataset2&lt;BR /&gt;
-----&lt;BR /&gt;
Datestamp		     Amt	 reference&lt;BR /&gt;
2011-01-11.12:32:00:655 55000.36 09XYZABCD0885&lt;BR /&gt;
2011-01-11.12:32:44:001 55000.36 09XYZABCD0885&lt;BR /&gt;
2011-01-11.12:32:21:001 55000.36 09XYZABCD0885&lt;BR /&gt;
2011-01-13.10:28:33:228   201.00 09XYZABCD0886&lt;BR /&gt;
&lt;BR /&gt;
&lt;BR /&gt;
I have to update dataset2 with the variable userid. The keys mapping the two datasets are datestamp, amt, and reference. The tricky part is that there will not necessarily be an exact match down to the second and millisecond. The logic is: first check whether everything matches exactly, and if so update that record of dataset2 with userid; otherwise check whether there is at least a match down to the second, and if so update that record; otherwise check whether there is at least a match down to the minute, and if so update that record. If no match is found at all, write the record to an error dataset.&lt;BR /&gt;
One might ask why we can't simply compare at the minute level, since we update even when only the minute matches, but the requirement is that they want to be absolutely certain for at least most of the records.&lt;BR /&gt;
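In outline, the required cascade is (a sketch only; the names are illustrative):

```
if datestamp (to the millisecond), amt and reference all match -> update userid
else if they match down to the second                          -> update userid
else if they match down to the minute                          -> update userid
else                                                           -> write the dataset2 record to the error dataset
```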
Can anyone suggest an efficient way of achieving this? Dataset1 contains at least 2 million records and dataset2 contains at least 5 million records.</description>
      <pubDate>Wed, 19 Jan 2011 15:11:09 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Merging-or-updating-a-datset-efficiently/m-p/64201#M13981</guid>
      <dc:creator>iamfuvis</dc:creator>
      <dc:date>2011-01-19T15:11:09Z</dc:date>
    </item>
    <item>
      <title>Re: Merging or updating a dataset efficiently</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Merging-or-updating-a-datset-efficiently/m-p/64202#M13982</link>
      <description>that doesn't sound too big for an SQL join in memory&lt;BR /&gt;
2011-01-11.12:32:00:655 55000.36 09XYZABCD0885 tom&lt;BR /&gt;
= 8 + 8 + 13 + 10&lt;BR /&gt;
≈ 40 bytes per row&lt;BR /&gt;
40 * 2M rows (dataset1)&lt;BR /&gt;
+ 30 * 5M rows (dataset2)&lt;BR /&gt;
&lt;BR /&gt;
around 230MB of memory to hold both datasets&lt;BR /&gt;
even modest desktops might offer that.&lt;BR /&gt;
&lt;BR /&gt;
The join can be exact on reference, but amount must be rounded before comparing, and the timestamp can be converted to text or "moved to" the minute with intnx().&lt;BR /&gt;
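For illustration, such a join might look like this in PROC SQL (dataset and column names are assumed from the question; a sketch, not the poster's code):

```sas
proc sql;
  create table ds2_with_userid as
  select b.datestamp, b.amt, b.reference, a.userid
  from dataset2 as b
  left join dataset1 as a
    on  a.reference = b.reference
    /* compare amounts only after rounding to cents */
    and round(a.amt, .01) = round(b.amt, .01)
    /* truncate both datetimes to the start of their minute */
    and intnx('dtminute', a.datestamp, 0, 'b')
      = intnx('dtminute', b.datestamp, 0, 'b');
quit;
```

This matches only at minute precision; the exact-then-second-then-minute cascade and handling of multiple candidates per minute would still sit on top of it.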
The problem with amount is that it is stored with decimal fractions, and comparison of these can fail even when the values appear (rounded to two decimal places) to be the same. Other ways to make an exact comparison of AMT are to convert it to an integer after multiplying by 100, or to convert it to text with the format z19.2.</description>
      <pubDate>Thu, 20 Jan 2011 14:50:00 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Merging-or-updating-a-datset-efficiently/m-p/64202#M13982</guid>
      <dc:creator>Peter_C</dc:creator>
      <dc:date>2011-01-20T14:50:00Z</dc:date>
    </item>
    <item>
      <title>Re: Merging or updating a dataset efficiently</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Merging-or-updating-a-datset-efficiently/m-p/64203#M13983</link>
      <description>Instead of matching milliseconds, then seconds, etc., I suggest you calculate the difference between the time stamps, limiting your matches to those below 60,000 milliseconds if you want (otherwise a record at 2010dec31:23:59:59.999 will match any record that day instead of matching the 2011jan01:00:00:00.000 record). You can then keep the match with the lowest time difference.&lt;BR /&gt;
&lt;BR /&gt;
SQL can obviously do this using&lt;BR /&gt;
  group by a.DATESTAMP, a.REFERENCE&lt;BR /&gt;
  having calculated DIF_MILLISECOND= min(calculated DIF_MILLISECOND) &lt;BR /&gt;
&lt;BR /&gt;
but SQL is a resource hog and you might have to split the data into small bits (by reference?) depending on your platform.&lt;BR /&gt;
&lt;BR /&gt;
To limit the size of the fuzzy match data, you can do the exact matches first and work with the rest from then onward:&lt;BR /&gt;
&lt;BR /&gt;
data T2_MATCH T2_NOMATCH;&lt;BR /&gt;
  /* both inputs must already be sorted by the BY variables */&lt;BR /&gt;
  merge T2(in=A)&lt;BR /&gt;
        T1(drop=AMT in=B);&lt;BR /&gt;
  by DATESTAMP REFERENCE MILLISECOND;&lt;BR /&gt;
  if A;                       /* keep only T2 records */&lt;BR /&gt;
  if B then output T2_MATCH;  /* exact key match found in T1 */&lt;BR /&gt;
  else output T2_NOMATCH;&lt;BR /&gt;
run;&lt;BR /&gt;
&lt;BR /&gt;
Instead of SQL, I would try to use a hash table with REFERENCE as the key and the multidata option. You can then calculate the time differences and keep the best match.&lt;BR /&gt;
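A sketch of that hash approach, assuming the names from the question, that USERID is a character variable of length 10, and that the datestamps are SAS datetime values (stored in seconds, so a difference of 60 is one minute). Illustration only, not tested:&lt;BR /&gt;

```sas
data T2_FUZZY T2_ERROR;
  if _n_ = 1 then do;
    /* declare host variables, then load T1; T1's timestamp is
       renamed so it does not clash with T2's DATESTAMP */
    if 0 then set T1(keep=DATESTAMP REFERENCE USERID
                     rename=(DATESTAMP=T1_DT));
    declare hash h(dataset:
      "T1(keep=DATESTAMP REFERENCE USERID rename=(DATESTAMP=T1_DT))",
      multidata:'y');
    h.defineKey('REFERENCE');
    h.defineData('T1_DT', 'USERID');
    h.defineDone();
  end;
  set T2_NOMATCH;
  length BEST_USERID $10;
  BEST_DIF = .;
  rc = h.find();                   /* first T1 row with this REFERENCE */
  do while (rc = 0);
    DIF = abs(DATESTAMP - T1_DT);  /* datetime values are in seconds */
    if DIF < 60 and (BEST_DIF = . or DIF < BEST_DIF) then do;
      BEST_DIF = DIF;
      BEST_USERID = USERID;
    end;
    rc = h.find_next();            /* next row with the same key */
  end;
  if BEST_DIF ne . then do;
    USERID = BEST_USERID;
    output T2_FUZZY;
  end;
  else output T2_ERROR;
run;
```

For brevity this ignores the AMT comparison; you could add round(AMT*100) to the hash key or test it inside the loop.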
&lt;BR /&gt;
Work on a subset first to see if this is faster than SQL in your case.</description>
      <pubDate>Thu, 20 Jan 2011 23:13:13 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Merging-or-updating-a-datset-efficiently/m-p/64203#M13983</guid>
      <dc:creator>ChrisNZ</dc:creator>
      <dc:date>2011-01-20T23:13:13Z</dc:date>
    </item>
  </channel>
</rss>

