<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Recommendations for Data reading in SAS Programming</title>
    <link>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110297#M22881</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;The most basic things you might want to check:&lt;/P&gt;&lt;P&gt;- missing values&lt;/P&gt;&lt;P&gt;- values outside the valid range (if one is defined)&lt;/P&gt;&lt;P&gt;- invalid characters (e.g. control characters)&lt;/P&gt;&lt;P&gt;- duplicate keys or rows (possible even when sourcing from a DBMS, e.g. when combining tables from two different systems)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;And for look-ups:&lt;/P&gt;&lt;P&gt;- no match on the look-up key (if a match is required, e.g. to a row in a reference table)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To get a feel for the data quality, PROC FREQ and PROC UNIVARIATE are quite helpful: e.g. a simple frequency count over your variables, then "eyeballing" the low-frequency cases (say, product names that occur only a few times in a data set with millions of rows).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;"But I would need more examples like this to avoid any future possibilities to commit any errors."&lt;/P&gt;&lt;P&gt;If data quality is poor, it is very hard to foresee all possible future cases. I would concentrate on the business keys, so that bad data there cannot mess up relationships (leaving you with unexpected many-to-many joins).&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Mon, 31 Dec 2012 05:37:17 GMT</pubDate>
    <dc:creator>Patrick</dc:creator>
    <dc:date>2012-12-31T05:37:17Z</dc:date>
    <item>
      <title>Recommendations for Data reading</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110292#M22876</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I work on banking customer data and need to create a datamart for different purposes. We read data from different sources, and it may not have been properly cleaned or entered accurately.&lt;/P&gt;&lt;P&gt;I also derive new variables from existing ones.&lt;/P&gt;&lt;P&gt;My question is about reading data in a way that keeps it as accurate as possible.&lt;/P&gt;&lt;P&gt;For example, applying the UPCASE function when reading variables whose input may arrive in upper or lower case, before deriving new variables from them.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;What would be your recommendations?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sat, 29 Dec 2012 05:09:14 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110292#M22876</guid>
      <dc:creator>bnarang</dc:creator>
      <dc:date>2012-12-29T05:09:14Z</dc:date>
    </item>
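    <!-- Editor's sketch: the upcasing idea from the question above, written out as a minimal SAS step. Dataset and variable names (raw_customers, cust_name) are hypothetical, chosen for illustration only. -->

```sas
/* Sketch: standardize case while deriving variables, so that
   'Smith', 'SMITH' and 'smith' all land in the same category.
   Dataset and variable names are hypothetical.                  */
data work.customers_std;
    set work.raw_customers;
    /* UPCASE normalizes case; STRIP removes leading/trailing blanks */
    cust_name_std = upcase(strip(cust_name));
run;
```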
    <item>
      <title>Re: Recommendations for Data reading</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110293#M22877</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Look at functions COMPLEV, COMPGED, SPEDIS and SOUNDEX as ways to find approximate matches.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;PG &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 30 Dec 2012 02:59:02 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110293#M22877</guid>
      <dc:creator>PGStats</dc:creator>
      <dc:date>2012-12-30T02:59:02Z</dc:date>
    </item>
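    <!-- Editor's sketch: how the fuzzy-comparison functions named above behave. The input strings are made up for illustration. -->

```sas
/* COMPLEV = Levenshtein edit distance, SPEDIS = spelling distance,
   SOUNDEX = phonetic code; small distances suggest likely matches. */
data _null_;
    lev = complev('Bhupinder', 'Bhpinder');      /* edit distance     */
    sp  = spedis('Bhupinder', 'Bhpinder');       /* spelling distance */
    snd = (soundex('Smith') = soundex('Smyth')); /* 1 if same code    */
    put lev= sp= snd=;
run;
```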
    <item>
      <title>Re: Recommendations for Data reading</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110294#M22878</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Bhupinder,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I also work with big data. To read raw data I always use the following approach:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;data test;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;infile "file_name.csv" delimiter=',' missover dsd lrecl=32767&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;firstobs=2;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;input&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;name&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : $10.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;occupation : $50.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;item&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : $25.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;;&lt;/P&gt;&lt;P&gt;run;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;This way you can always be sure that all the data are read with the appropriate informat.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Rgds.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 30 Dec 2012 15:05:42 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110294#M22878</guid>
      <dc:creator>newbie_ari</dc:creator>
      <dc:date>2012-12-30T15:05:42Z</dc:date>
    </item>
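    <!-- Editor's sketch: a quick way to verify what the INFILE step above actually produced. Assumes the WORK.TEST data set created in the previous post. -->

```sas
/* After reading, a quick sanity check of structure and content. */
proc contents data=work.test; run;       /* variable types, lengths */
proc print data=work.test(obs=5); run;   /* eyeball the first rows  */
```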
    <item>
      <title>Re: Recommendations for Data reading</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110295#M22879</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;The questions I would ask:&lt;/P&gt;&lt;P&gt;- What is the impact if data is not "clean"?&lt;/P&gt;&lt;P&gt;- Do I have a requirement to cleanse the data (and is there time/money planned for this)?&lt;/P&gt;&lt;P&gt;- Where should/can the data cleansing happen (as early as possible - but what is realistic)?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The first things I would look at are the business keys. If these are not clean, you must find a way to cleanse them (or get the source systems to cleanse them). If they are character fields, I normally COMPRESS and UPCASE such keys for joining (and then add a surrogate key to the target table so I don't have to care about mismatching business keys later on).&lt;/P&gt;&lt;P&gt;If things are really bad, you can also consider using match codes.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If you need to cleanse the data: be prepared that this can be quite a bit of work.&lt;/P&gt;&lt;P&gt;There is Ron Cody's book "Cody's Data Cleaning Techniques Using SAS" which might be helpful &lt;A href="http://support.sas.com/publishing/authors/cody.html" title="http://support.sas.com/publishing/authors/cody.html"&gt;SAS Press - Ron Cody Author Page&lt;/A&gt;, and there is of course also DataFlux for such tasks.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;And for your ETL design: design data-validation steps that create error and exception tables to capture issues. Also plan the actions to be taken when issues are found (e.g. stop the load and send emails/reports on the issues, or continue the load, or ...). So you need to plan not only for technical processes but also for business processes (who monitors these tables, who needs to take action, and which action).&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 31 Dec 2012 01:42:02 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110295#M22879</guid>
      <dc:creator>Patrick</dc:creator>
      <dc:date>2012-12-31T01:42:02Z</dc:date>
    </item>
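    <!-- Editor's sketch: the key-standardization idea described above (COMPRESS + UPCASE the business key, then add a surrogate key). Table and variable names (source_raw, cust_key) are hypothetical. -->

```sas
/* Standardize a character business key for joining, then add a
   surrogate key so later joins don't depend on the raw key text.
   Table and variable names are made up for illustration.        */
data work.target;
    set work.source_raw;
    /* COMPRESS drops blanks, UPCASE normalizes case */
    cust_key_std = upcase(compress(cust_key));
    surrogate_id + 1;   /* simple incrementing surrogate key */
run;
```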
    <item>
      <title>Re: Recommendations for Data reading</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110296#M22880</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks Patrick. Basically, I work on creating a SAS process for datamart production, which fetches historical data of up to 100 million rows (sometimes more) from different sources - never txt or csv, always from a DBMS. But I have realized there are loopholes and data issues.&lt;/P&gt;&lt;P&gt;While reading the data, we must apply some checks to make sure that whatever we read or derive from it cannot go wrong. The upcasing example I mentioned is a realization I made recently, after finding an error in the output: I needed to use the UPCASE function as well.&lt;/P&gt;&lt;P&gt;But I would need more examples like this to avoid any future possibility of committing errors.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I can certainly look into the books you have referred to.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Bhupinder&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 31 Dec 2012 04:04:57 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110296#M22880</guid>
      <dc:creator>bnarang</dc:creator>
      <dc:date>2012-12-31T04:04:57Z</dc:date>
    </item>
    <item>
      <title>Re: Recommendations for Data reading</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110297#M22881</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;The most basic things you might want to check:&lt;/P&gt;&lt;P&gt;- missing values&lt;/P&gt;&lt;P&gt;- values outside the valid range (if one is defined)&lt;/P&gt;&lt;P&gt;- invalid characters (e.g. control characters)&lt;/P&gt;&lt;P&gt;- duplicate keys or rows (possible even when sourcing from a DBMS, e.g. when combining tables from two different systems)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;And for look-ups:&lt;/P&gt;&lt;P&gt;- no match on the look-up key (if a match is required, e.g. to a row in a reference table)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To get a feel for the data quality, PROC FREQ and PROC UNIVARIATE are quite helpful: e.g. a simple frequency count over your variables, then "eyeballing" the low-frequency cases (say, product names that occur only a few times in a data set with millions of rows).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;"But I would need more examples like this to avoid any future possibilities to commit any errors."&lt;/P&gt;&lt;P&gt;If data quality is poor, it is very hard to foresee all possible future cases. I would concentrate on the business keys, so that bad data there cannot mess up relationships (leaving you with unexpected many-to-many joins).&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 31 Dec 2012 05:37:17 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110297#M22881</guid>
      <dc:creator>Patrick</dc:creator>
      <dc:date>2012-12-31T05:37:17Z</dc:date>
    </item>
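    <!-- Editor's sketch: the checks listed in the post above, written as SAS steps. All dataset and variable names (trans, product_name, amount, cust_key, ref_products) are hypothetical; both inputs to the MERGE are assumed pre-sorted by product_key. -->

```sas
/* Frequency counts to eyeball low-frequency (suspect) values,
   including missings.                                           */
proc freq data=work.trans order=freq;
    tables product_name / missing;
run;

/* Distribution and range check for a numeric variable.          */
proc univariate data=work.trans;
    var amount;
run;

/* Duplicate-key check: unique rows kept, duplicates set aside.  */
proc sort data=work.trans out=work.trans_uniq
          dupout=work.trans_dups nodupkey;
    by cust_key;
run;

/* Look-up mismatch check against a reference table
   (both data sets sorted by product_key beforehand).            */
data work.matched work.no_match;
    merge work.trans(in=t) work.ref_products(in=r);
    by product_key;
    if t and not r then output work.no_match;  /* exception rows */
    else if t then output work.matched;
run;
```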
    <item>
      <title>Re: Recommendations for Data reading</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110298#M22882</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks for the reply. It makes a lot of sense to first analyse your input data with basic procedures like FREQ and UNIVARIATE before blindly processing it.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 30 Jan 2013 02:51:14 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Recommendations-for-Data-reading/m-p/110298#M22882</guid>
      <dc:creator>bnarang</dc:creator>
      <dc:date>2013-01-30T02:51:14Z</dc:date>
    </item>
  </channel>
</rss>

