<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Oracle v SAS UTF-8 v Hadoop in SAS Data Management</title>
    <link>https://communities.sas.com/t5/SAS-Data-Management/Oracle-v-SAS-UTF-8-v-Hadoop/m-p/561533#M17191</link>
    <description>&lt;P&gt;I'll play around with the multiplier option. Thank you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As for the second point - I think you missed what I said. Hadoop is weird, and it's weird &lt;I&gt;because you can't&amp;nbsp;update data. &lt;/I&gt;I know that. And since the design, which I have no say over, already calls for SCD2 tables in Hadoop, I had no choice but to write transformations to handle updating existing tables with changing source data. What you're suggesting would make an already complicated process even worse.&lt;/P&gt;</description>
    <pubDate>Sat, 25 May 2019 04:13:28 GMT</pubDate>
    <dc:creator>LaurieF</dc:creator>
    <dc:date>2019-05-25T04:13:28Z</dc:date>
    <item>
      <title>Oracle v SAS UTF-8 v Hadoop</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Oracle-v-SAS-UTF-8-v-Hadoop/m-p/560340#M17161</link>
      <description>&lt;UL&gt;
&lt;LI&gt;The Oracle source schema is defined as utf-8&lt;/LI&gt;
&lt;LI&gt;The SAS Oracle libname to the schema has&amp;nbsp;&lt;EM&gt;encoding="utf-8"&amp;nbsp;&lt;/EM&gt;on it&lt;/LI&gt;
&lt;LI&gt;SAS is running with utf-8 encoding&lt;/LI&gt;
&lt;LI&gt;Hadoop naturally runs with utf-8&lt;/LI&gt;
&lt;LI&gt;Because Hadoop is weird, I've got user-written transformations to do SCD-2 processing&lt;/LI&gt;
&lt;LI&gt;Because Hadoop is blindingly fast, rebuilding the target table from scratch appears so far to be an extremely efficient way of moving data around. But Hadoop is weird.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;When I register the metadata to an Oracle table, it shows the column length for character variables as four times the varchar2 length - to support the maximum length a variable can take. The Hadoop table registration shows the (utf-8) character length.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Assume a&amp;nbsp;&lt;EM&gt;varchar2(512)&amp;nbsp;&lt;/EM&gt;in Oracle going to&amp;nbsp;&lt;EM&gt;varchar(512)&lt;/EM&gt; in Hadoop (initially in Hive, but later processed in Impala/Parquet). The registration for that column shows it as having a length of &lt;EM&gt;2048&lt;/EM&gt;. Shrug - I don't care in this case, because I know there are no special characters. It becomes an issue later with other source data that I don't have any control over. I need to see the data to check what's going on in that case. But not today.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When I do a bog-standard SAS/SQL insert of it into the Hive table, for this status_message column I'm getting:&lt;/P&gt;
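&lt;P&gt;(Something along these lines - the librefs and table names here are made up:)&lt;/P&gt;
&lt;PRE&gt;/* Hypothetical librefs: ora = SAS/ACCESS to Oracle, hdp = Hive */
proc sql;
insert into hdp.target_table
select * from ora.source_table;
quit;&lt;/PRE&gt;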
&lt;P&gt;&lt;EM&gt;WARNING: Character expression will be truncated when assigned to character column status_message.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is a message I'd expect to get with &lt;EM&gt;catx&lt;/EM&gt; in SQL where the variable length exceeds 200 (BTW, SAS, can you fix this?!). But I appreciate in this case that it sees a length of&amp;nbsp;&lt;EM&gt;2048&lt;/EM&gt; which it's trying to squeeze into&amp;nbsp;&lt;EM&gt;512,&lt;/EM&gt; and SAS semi-helpfully wants me to know about it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What to do? What to do? I can make the target table have the exploded lengths, but that's cheating. I don't want to have to cheat. I want to be able to avoid all warning messages if I can, and obviously preserve data integrity. What happens when I &lt;EM&gt;do&lt;/EM&gt; have UTF-8 characters, like Te Reo Māori macrons, or emoji characters (apparently, there will be some!)?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Laurie&lt;/P&gt;</description>
      <pubDate>Tue, 21 May 2019 03:50:32 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Oracle-v-SAS-UTF-8-v-Hadoop/m-p/560340#M17161</guid>
      <dc:creator>LaurieF</dc:creator>
      <dc:date>2019-05-21T03:50:32Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle v SAS UTF-8 v Hadoop</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Oracle-v-SAS-UTF-8-v-Hadoop/m-p/560687#M17173</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/17429"&gt;@LaurieF&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You're raising here a few different things so let me just throw in a few thoughts/answers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Multiplied Lengths&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;SAS defines character lengths in number of bytes, not number of characters. Most databases define (or allow you to define) lengths in number of characters.&lt;/P&gt;
&lt;P&gt;UTF-8 can use up to 4 bytes of storage for a single character.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To avoid string truncation when moving data from a database into a SAS table, SAS creates SAS character variables with a multiplied length, &lt;STRONG&gt;BUT&lt;/STRONG&gt;&amp;nbsp;there are libname options (i.e. multiplier=...) that let you change this default behaviour for access to Hadoop.&lt;/P&gt;
&lt;P&gt;If you know there won't be any multibyte characters in the source, set the multiplier option to 1.&lt;/P&gt;
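&lt;P&gt;A minimal sketch of the idea (the exact option names differ per SAS/ACCESS engine and release - check the documentation for yours; the libref and connection details below are illustrative, not verified syntax):&lt;/P&gt;
&lt;PRE&gt;/* On the Oracle side, ADJUST_BYTE_SEMANTIC_COLUMN_LENGTHS=NO is meant
   to stop SAS multiplying byte-semantic column lengths by the maximum
   bytes-per-character of the session encoding */
libname ora oracle user=&amp;user password=&amp;pw path=orapath
        adjust_byte_semantic_column_lengths=no;&lt;/PRE&gt;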
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Because Hadoop is weird, I've got user-written transformations to do SCD-2 processing&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;HDFS is an immutable file system, meaning you can only write once to a file. So: Insert and Read work as anywhere else; Update and Delete require a full recreation of the file on HDFS.&lt;/P&gt;
&lt;P&gt;I haven't got my hands on it yet, but I feel that SCD2 on Hadoop should use a somewhat different table-load and data-organisation approach than you would normally.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So... what if you insert only, for any change? You never update a record; you just add a new row, even if it's only an expiration (e.g. a key present in target but no longer in source). An expiration is then simply taking the most current record from target and inserting it again with a modified change end date.&lt;/P&gt;
&lt;P&gt;You then define a Hive view over this SCD2 table in Hadoop with a window function which returns, per key, only the row with the latest change begin date.&lt;/P&gt;
&lt;P&gt;And then maybe also implement a housekeeping process which, from time to time, moves history records to a history table and then recreates the active SCD2 table with current records only (or with history records back to a certain time only).&lt;/P&gt;
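&lt;P&gt;That view could be sketched like this (table and column names are hypothetical):&lt;/P&gt;
&lt;PRE&gt;-- Per business key, keep only the row with the latest change begin date
CREATE VIEW scd2_current AS
SELECT business_key, status_message, change_begin_dt, change_end_dt
FROM (
  SELECT t.*,
         ROW_NUMBER() OVER (PARTITION BY business_key
                            ORDER BY change_begin_dt DESC) AS rn
  FROM scd2_table t
) ranked
WHERE rn = 1;&lt;/PRE&gt;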
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 21 May 2019 23:18:46 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Oracle-v-SAS-UTF-8-v-Hadoop/m-p/560687#M17173</guid>
      <dc:creator>Patrick</dc:creator>
      <dc:date>2019-05-21T23:18:46Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle v SAS UTF-8 v Hadoop</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Oracle-v-SAS-UTF-8-v-Hadoop/m-p/561533#M17191</link>
      <description>&lt;P&gt;I'll play around with the multiplier option. Thank you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As for the second point - I think you missed what I said. Hadoop is weird, and it's weird &lt;I&gt;because you can't&amp;nbsp;update data. &lt;/I&gt;I know that. And since the design, which I have no say over, already calls for SCD2 tables in Hadoop, I had no choice but to write transformations to handle updating existing tables with changing source data. What you're suggesting would make an already complicated process even worse.&lt;/P&gt;</description>
      <pubDate>Sat, 25 May 2019 04:13:28 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Oracle-v-SAS-UTF-8-v-Hadoop/m-p/561533#M17191</guid>
      <dc:creator>LaurieF</dc:creator>
      <dc:date>2019-05-25T04:13:28Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle v SAS UTF-8 v Hadoop</title>
      <link>https://communities.sas.com/t5/SAS-Data-Management/Oracle-v-SAS-UTF-8-v-Hadoop/m-p/561539#M17193</link>
      <description>&lt;P&gt;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/17429"&gt;@LaurieF&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;According to the documentation, Update, Delete and Merge are implemented in Hive, and you can use them if ACID is turned on.&amp;nbsp;&lt;A href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML" target="_blank" rel="noopener"&gt;https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Because HDFS is an immutable file system, insert operations will perform much better.&lt;/P&gt;
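&lt;P&gt;For completeness, a minimal sketch of the ACID route per that manual (assumes the Hive transaction manager is enabled; table and column names are made up - ACID DML needs a bucketed, ORC-stored, transactional table):&lt;/P&gt;
&lt;PRE&gt;CREATE TABLE scd2_acid (
  business_key STRING,
  status_message VARCHAR(512),
  change_end_dt TIMESTAMP
)
CLUSTERED BY (business_key) INTO 8 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- Only possible because the table is transactional
UPDATE scd2_acid
SET change_end_dt = current_timestamp()
WHERE business_key = '42';&lt;/PRE&gt;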
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;All I'm saying is that you've got two choices when going for an insert-only approach:&lt;/P&gt;
&lt;P&gt;Either a full table replace, or inserts only for any change with some sort of record versioning (which can also use date columns). And to ease any queries on this versioned table, also a Hive view over it with a partition clause to return only the most current version of a record.&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 25 May 2019 06:24:26 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Management/Oracle-v-SAS-UTF-8-v-Hadoop/m-p/561539#M17193</guid>
      <dc:creator>Patrick</dc:creator>
      <dc:date>2019-05-25T06:24:26Z</dc:date>
    </item>
  </channel>
</rss>

