<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: SAS taking lot of time reading observations JSON in SAS Programming</title>
    <link>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/691182#M210335</link>
    <description>&lt;P&gt;By "concatenate" do you mean that you stack data from different datasets, interleaving by policy_no, and create an indicator from where the observation is? If yes, this can be done in a single data step for all policy_no.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The code as such looks much too complicated, and I am not surprised that the setup of so many individual steps takes so long.&lt;/P&gt;
&lt;P&gt;Next, I would not do such work directly from JSON libraries. Instead, load each needed dataset as is from JSON to WORK first, and then proceed from there. Similarly, write the accumulated output &lt;EM&gt;once&lt;/EM&gt; at the end of all processing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you provide examples for the incoming datasets and the wanted result from that (for several policy_no), I can help you with improved code.&lt;/P&gt;</description>
    <pubDate>Tue, 13 Oct 2020 09:58:56 GMT</pubDate>
    <dc:creator>Kurt_Bremser</dc:creator>
    <dc:date>2020-10-13T09:58:56Z</dc:date>
    <item>
      <title>SAS taking lot of time reading observations JSON</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/690929#M210221</link>
      <description>&lt;P&gt;I have a unique requirement. I have 4 SAS datasets in a SAS library (p4, A, B, C). I have to read one observation from A, do some processing, and store it in a temp dataset. Then read one observation from B, do some processing, and store it in a temp dataset. Then read one observation from C, do some processing, and store it in a temp dataset. Then I write all the temp datasets in JSON format. I am using multiple DATA steps and PROC JSON. It is taking an unusually large amount of wall-clock time. Below is the sample code. Is there a better way to do this, to avoid program load delay and voluntary wait?&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;LIBNAME JSON 'JSON';
%DO I = &amp;amp;fstobs %to &amp;amp;COUNT;

  data a1;
    set JSON.policy4(FIRSTOBS=&amp;amp;I OBS=&amp;amp;I);
    call symput('policy_no',policyNumber);
  run;

  data aa;
    set JSON.A;
    if policy &amp;gt; &amp;amp;policy_no then do; STOP; end;
    else if policy = &amp;amp;policy_no then do;
      N + 1;
      drop N;
      id = cats('&amp;amp;policy_id',_N_);
      output;
    end;
  run;

  data bb;
    set JSON.b;
    if policy &amp;gt; &amp;amp;policy_no then do; STOP; end;
    else if policy = &amp;amp;policy_no then do;
      N + 1;
      drop N;
      id = cats('&amp;amp;policy_id',_N_);
      output;
    end;
  run;

  data cc;
    set json.c3;
    if policyNumber &amp;gt; &amp;amp;policy_no then do; STOP; end;
    else if policyNumber = &amp;amp;policy_no then do;
      N + 1;
      drop N;
      id = cats('&amp;amp;policy_id',_N_);
      output;
    end;
  run;

%macro output_json(table1,table2,table3,table4);
  PROC JSON OUT=dat3 NOPRETTY NOSASTAGS FMTDATETIME FMTNUMERIC;
    write open object;
    write values 'kkkkkk';
    EXPORT &amp;amp;table1;
    write values 'jjjjjj';
    write open array;
    EXPORT &amp;amp;table2;
    write close;
    write values 'xxxxx          ';
    write open array;
    EXPORT &amp;amp;table3;
    write close;
    write values 'yyyyyy';
    write open array;
    EXPORT &amp;amp;table4;
    write close;
&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;SAS LOG&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;FIRST POLICY or FIRST OBSERVATION&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(for 1000 observations it is taking ~10 minutes). We have to do this for more than 1Bn.&lt;/P&gt;&lt;P&gt;NOTE: There were 9 observations read from the data set PHOENIX.POLCOMP3.&lt;BR /&gt;NOTE: The data set WORK.PCOMP1 has 8 observations and 25 variables.&lt;BR /&gt;NOTE: The DATA statement used the following resources:&lt;BR /&gt;CPU time - 00:00:00.00&lt;BR /&gt;Elapsed time - 00:00:00.03&lt;BR /&gt;EXCP count - 94&lt;BR /&gt;Task memory - 5291K (156K data, 5135K program)&lt;BR /&gt;Total memory - 29582K (7840K data, 21742K program)&lt;BR /&gt;Timestamp - 10/11/2020 10:51:06 PM&lt;BR /&gt;NOTE: The address space has used a maximum of 924K below the line and 37904K abo&lt;/P&gt;&lt;P&gt;NOTE: There were 9 observations read from the data set PHOENIX.AGENT3.&lt;BR /&gt;NOTE: The data set WORK.AGENT has 8 observations and 36 variables.&lt;BR /&gt;NOTE: The DATA statement used the following resources:&lt;BR /&gt;CPU time - 00:00:00.00&lt;BR /&gt;Elapsed time - 00:00:00.02&lt;BR /&gt;EXCP count - 95&lt;BR /&gt;Task memory - 5296K (161K data, 5135K program)&lt;BR /&gt;Total memory - 29582K (7840K data, 21742K program)&lt;BR /&gt;Timestamp - 10/11/2020 10:51:06 PM&lt;BR /&gt;NOTE: The address space has used a maximum of 924K below the line and 37904K abo&lt;/P&gt;&lt;P&gt;NOTE: There were 30 observations read from the data set PHOENIX.CPYMENT3.&lt;BR /&gt;NOTE: The data set WORK.CPAYMNT has 29 observations and 19 variables.&lt;BR /&gt;NOTE: The DATA statement used the following resources:&lt;BR /&gt;CPU time - 00:00:00.00&lt;BR /&gt;Elapsed time - 00:00:00.03&lt;BR /&gt;EXCP count - 94&lt;BR /&gt;Task memory - 5291K (156K data, 5135K program)&lt;BR /&gt;Total memory - 29582K (7840K data, 21742K program)&lt;BR /&gt;Timestamp - 10/11/2020 10:51:06 PM&lt;BR /&gt;NOTE: The 
address space has used a maximum of 924K below the line and 37904K abo&lt;/P&gt;&lt;P&gt;NOTE: The PROCEDURE JSON used the following resources:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;  CPU     time -         00:00:00.00                                        
  Elapsed time -         00:00:00.01                                        
  EXCP count   - 21                                                         
  Task  memory - 3800K (91K data, 3709K program)                            
  Total memory - 29648K (7840K data, 21808K program)                        
  Timestamp    - 10/11/2020 10:51:06 PM                                     &lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;NOTE: The address space has used a maximum of 924K below the line and 37904K abo&lt;/P&gt;</description>
      <pubDate>Mon, 12 Oct 2020 11:12:20 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/690929#M210221</guid>
      <dc:creator>Sasnewuser222</dc:creator>
      <dc:date>2020-10-12T11:12:20Z</dc:date>
    </item>
    <item>
      <title>Re: SAS taking lot of time reading observations JSON</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/690943#M210231</link>
      <description>&lt;P&gt;The log you posted took less than a second to run through; the individual steps finish in a few hundredths of a second. Your issue must lie somewhere else.&lt;/P&gt;</description>
      <pubDate>Mon, 12 Oct 2020 12:22:41 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/690943#M210231</guid>
      <dc:creator>Kurt_Bremser</dc:creator>
      <dc:date>2020-10-12T12:22:41Z</dc:date>
    </item>
    <item>
      <title>Re: SAS taking lot of time reading observations JSON</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/690947#M210233</link>
      <description>&lt;P&gt;If all this is to make counts for policy numbers, that can be done in SQL for all policy numbers in one step per dataset.&lt;/P&gt;
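&lt;P&gt;A minimal sketch of that idea (dataset and variable names are assumed from your posted code, and I assume the data has already been loaded to WORK; adjust as needed):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;/* One pass over the whole dataset: counts per policy, no %DO loop */
proc sql;
  create table counts_a as
    select policy, count(*) as n_obs
    from work.a
    group by policy;
quit;&lt;/CODE&gt;&lt;/PRE&gt;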
&lt;P&gt;You can then join the results and subset with a WHERE, if needed.&lt;/P&gt;</description>
      <pubDate>Mon, 12 Oct 2020 12:27:34 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/690947#M210233</guid>
      <dc:creator>Kurt_Bremser</dc:creator>
      <dc:date>2020-10-12T12:27:34Z</dc:date>
    </item>
    <item>
      <title>Re: SAS taking lot of time reading observations JSON</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/690986#M210258</link>
      <description>Thanks, Sir. I am running this code on the mainframe, though the code shown is just a sample. When I profiled it in the mainframe profiler CA-MATUNER, I saw 50% of the time spent in voluntary wait, and I am not able to identify where that wait occurs. (For 1000 observations it takes ~3 minutes of wall-clock time, just for the SAS datasets. I have 10 SAS datasets in total, for which it takes ~15 minutes of wall-clock time.)</description>
      <pubDate>Mon, 12 Oct 2020 14:29:10 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/690986#M210258</guid>
      <dc:creator>Sasnewuser222</dc:creator>
      <dc:date>2020-10-12T14:29:10Z</dc:date>
    </item>
    <item>
      <title>Re: SAS taking lot of time reading observations JSON</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/690999#M210264</link>
      <description>The &amp;amp;COUNT is to limit the loop. Basically, I am trying to concatenate observations from different SAS datasets based on policy, but I add a sequence number before concatenation. Hence I am reading every SAS dataset in the loop, writing the results to temp datasets, and finally writing them as a single document with PROC JSON. These multiple DATA steps are taking a lot of time.</description>
      <pubDate>Mon, 12 Oct 2020 15:40:33 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/690999#M210264</guid>
      <dc:creator>Sasnewuser222</dc:creator>
      <dc:date>2020-10-12T15:40:33Z</dc:date>
    </item>
    <item>
      <title>Re: SAS taking lot of time reading observations JSON</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/691182#M210335</link>
      <description>&lt;P&gt;By "concatenate" do you mean that you stack data from different datasets, interleaving by policy_no, and create an indicator from where the observation is? If yes, this can be done in a single data step for all policy_no.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The code as such looks much too complicated, and I am not surprised that the setup of so many individual steps takes so long.&lt;/P&gt;
&lt;P&gt;Next, I would not do such work directly from JSON libraries. Instead, load each needed dataset as is from JSON to WORK first, and then proceed from there. Similarly, write the accumulated output &lt;EM&gt;once&lt;/EM&gt; at the end of all processing.&lt;/P&gt;
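&lt;P&gt;For example (a sketch only; the library and dataset names are taken from your sample code and may differ):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;libname json 'JSON';

/* Read each JSON dataset exactly once into WORK,
   then do all further processing from WORK */
data work.policy4; set json.policy4; run;
data work.a;       set json.a;       run;
data work.b;       set json.b;       run;
data work.c3;      set json.c3;      run;&lt;/CODE&gt;&lt;/PRE&gt;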
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you provide examples for the incoming datasets and the wanted result from that (for several policy_no), I can help you with improved code.&lt;/P&gt;</description>
      <pubDate>Tue, 13 Oct 2020 09:58:56 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/691182#M210335</guid>
      <dc:creator>Kurt_Bremser</dc:creator>
      <dc:date>2020-10-13T09:58:56Z</dc:date>
    </item>
    <item>
      <title>Re: SAS taking lot of time reading observations JSON</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/691202#M210344</link>
      <description>Thanks, Sir. Below is the sample input and the desired output I want to achieve with the above code example.&lt;BR /&gt;Input&lt;BR /&gt;SAS DATASET1&lt;BR /&gt;POLICY Unique ID&lt;BR /&gt;A 10&lt;BR /&gt;B 20&lt;BR /&gt;C 30&lt;BR /&gt;D 40&lt;BR /&gt;E 50&lt;BR /&gt;&lt;BR /&gt;SAS DATASET2&lt;BR /&gt;POLICY Unique ID OTHER DETAILS&lt;BR /&gt;A -blank- XXXXXX&lt;BR /&gt;A -blank- XXXXXX&lt;BR /&gt;A -blank- XXXXXX&lt;BR /&gt;B -blank- XXXXXX&lt;BR /&gt;&lt;BR /&gt;SAS DATASET3&lt;BR /&gt;POLICY Unique ID OTHER DETAILS3&lt;BR /&gt;A -blank- XXXXXX&lt;BR /&gt;A -blank- XXXXXX&lt;BR /&gt;B -blank- XXXXXX&lt;BR /&gt;Desired output&lt;BR /&gt;&lt;BR /&gt;Stage 1: (First I have to match policy and add the Unique ID and a matching sequence number to SAS dataset2 and SAS dataset3.)&lt;BR /&gt;The output will be:&lt;BR /&gt;&lt;BR /&gt;SAS DATASET1&lt;BR /&gt;POLICY Unique ID&lt;BR /&gt;A 10&lt;BR /&gt;B 20&lt;BR /&gt;C 30&lt;BR /&gt;D 40&lt;BR /&gt;E 50&lt;BR /&gt;&lt;BR /&gt;SAS DATASET2&lt;BR /&gt;POLICY Unique ID OTHER DETAILS&lt;BR /&gt;A 10_1 XXXXXX&lt;BR /&gt;A 10_2 XXXXXX&lt;BR /&gt;A 10_3 XXXXXX&lt;BR /&gt;B 20_1 XXXXXX&lt;BR /&gt;&lt;BR /&gt;SAS DATASET3&lt;BR /&gt;POLICY Unique ID OTHER DETAILS3&lt;BR /&gt;A 10_1 XXXXXX&lt;BR /&gt;A 10_2 XXXXXX&lt;BR /&gt;B 20_1 XXXXXX&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Stage 2:&lt;BR /&gt;&lt;BR /&gt;Create the JSON document/container with PROC JSON, e.g.:&lt;BR /&gt;&lt;BR /&gt;1) POLICY A 10 [A 10_1 XXXXXX, A 10_2 XXXXXX, A 10_3 XXXXXX] [A 10_1 XXXXXX, A 10_2 XXXXXX]&lt;BR /&gt;2) POLICY B 20 [B 20_1 XXXXXX, B 20_1 XXXXXX]&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 13 Oct 2020 11:16:19 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/691202#M210344</guid>
      <dc:creator>Sasnewuser222</dc:creator>
      <dc:date>2020-10-13T11:16:19Z</dc:date>
    </item>
    <item>
      <title>Re: SAS taking lot of time reading observations JSON</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/691222#M210355</link>
      <description>&lt;P&gt;See this:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;data dataset1;
input POLICY $ Unique_ID $;
datalines;
A 10
B 20
C 30
D 40
E 50
;

data dataset2;
input POLICY $ Unique_ID $ OTHER_DETAILS $;
datalines;
A -blank- XXXXXX
A -blank- XXXXXX
A -blank- XXXXXX
B -blank- XXXXXX
;

data dataset3;
input POLICY $ Unique_ID $ OTHER_DETAILS3 $;
datalines;
A -blank- XXXXXX
A -blank- XXXXXX
B -blank- XXXXXX
;

data ds2_expanded;
set dataset2;
by policy;
if _n_ = 1
then do;
  declare hash ds1 (dataset:"dataset1");
  ds1.definekey("policy");
  ds1.definedata("unique_id");
  ds1.definedone();
end;
if first.policy
then count = 1;
else count + 1;
if ds1.find() = 0 then unique_id = catx('_',unique_id,count);
drop count;
run;

data ds3_expanded;
set dataset3;
by policy;
if _n_ = 1
then do;
  declare hash ds1 (dataset:"dataset1");
  ds1.definekey("policy");
  ds1.definedata("unique_id");
  ds1.definedone();
end;
if first.policy
then count = 1;
else count + 1;
if ds1.find() = 0 then unique_id = catx('_',unique_id,count);
drop count;
run;

data final;
set
  ds2_expanded
  ds3_expanded (rename=(other_details3=other_details))
;
by policy;
run;&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;The final dataset has all the content; you then need to build the JSON "lines" by concatenating again with BY policy processing.&lt;/P&gt;
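&lt;P&gt;One way to sketch that concatenation step (variable names assumed from the code above; the $32767 length is just a generous guess):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;/* Collapse FINAL into one "line" per policy */
data json_lines;
  set final;
  by policy;
  length line $ 32767;
  retain line;
  if first.policy then line = catx(' ', policy, unique_id, other_details);
  else line = catx(',', line, catx(' ', unique_id, other_details));
  if last.policy then output;   /* one observation per policy */
  keep policy line;
run;&lt;/CODE&gt;&lt;/PRE&gt;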
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The advantage is that you do not need to process single policies separately.&lt;/P&gt;
&lt;P&gt;If memory constraints on the mainframe make the use of the hash object impossible, these steps can be done with a data step MERGE; the advantage of the hash is that your dataset1 does not need to be sorted.&lt;/P&gt;
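&lt;P&gt;A sketch of that MERGE alternative for dataset2 (untested; note the DROP and RENAME, so the blank unique_id coming from dataset2 does not clobber the id contributed by dataset1):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;proc sort data=dataset1; by policy; run;
proc sort data=dataset2; by policy; run;

data ds2_expanded;
  merge
    dataset2 (in=in2 drop=unique_id)       /* drop the blank id       */
    dataset1 (rename=(unique_id=base_id))  /* one id per policy       */
  ;
  by policy;
  if in2;                                  /* keep dataset2 rows only */
  if first.policy then count = 1;
  else count + 1;
  length unique_id $ 20;
  unique_id = catx('_', base_id, count);
  drop count base_id;
run;&lt;/CODE&gt;&lt;/PRE&gt;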
&lt;P&gt;The other datasets need to be sorted by policy to make BY group processing possible.&lt;/P&gt;</description>
      <pubDate>Tue, 13 Oct 2020 12:11:40 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/691222#M210355</guid>
      <dc:creator>Kurt_Bremser</dc:creator>
      <dc:date>2020-10-13T12:11:40Z</dc:date>
    </item>
    <item>
      <title>Re: SAS taking lot of time reading observations JSON</title>
      <link>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/691308#M210375</link>
      <description>Thanks very much, Kurt_Bremser. I will test this and reply. Thanks again for your time.</description>
      <pubDate>Tue, 13 Oct 2020 15:22:12 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/SAS-taking-lot-of-time-reading-observations-JSON/m-p/691308#M210375</guid>
      <dc:creator>Sasnewuser222</dc:creator>
      <dc:date>2020-10-13T15:22:12Z</dc:date>
    </item>
  </channel>
</rss>

