Quentin
Super User

Hi,

When using EG connected to a server, I'm assuming the server writes the SAS log to a temporary file or catalog somewhere before sending it back to EG after the submission has completed. Is there a way to find out where the log text is stored, so that it could be used for automated log parsing / log scanning? I looked in DICTIONARY.EXTFILES, but couldn't see it.

Currently, if I want to submit EG code and have it parse its own log, I use PROC PRINTTO to send the log to a file, then parse it, then use a PROC PRINTTO reset and a DATA _NULL_ step to send the file back to the default log location. It works, but feels clunky. If I were able to simply access the current log, it would feel cleaner. On PC SAS (DM and batch) and simple server batch submits, I don't need to redirect the log at all, so in the code all you see at the end is a call to %logparse.

And yes, the EG Log Summary is great, but there are advantages to customized log scanning that can be applied across clients.

Pseudocode for my current work-around approach:

*Redirect the log;
filename logfile "%sysfunc(pathname(work))/mylog.log";

proc printto log=logfile new;
run;

*Main processing here;

*Parse the log - on a Linux server the log file can be read even while it
 is still open for writing, but on a Windows server PROC PRINTTO had to be
 reset first to release the file;
%logparse(log=logfile)

*Reset PROC PRINTTO and copy the log file back to the default log location;
proc printto log=log;
run;

data _null_;
   infile logfile;
   input;
   putlog _infile_;
run;

Thanks,

--Q.

LeRoyBessler
Obsidian | Level 7

Sounds like you want to scan and parse the SAS log with automation while the SAS program is running.

As to the whereabouts of the SAS log on a server, I once stumbled across it, but can't remember where. It was encoded, not simply parseable, not like the lines of text that come back at the end of the run. I don't believe it is stored in any directly associated SAS libraries or files of the EG session that are visible/accessible to the EG user.

With PROC PRINTTO, while Windows will not allow you to work with the log in another SAS program while the PRINTTO has the log copy allocated for output, it IS possible to open the PRINTTO file with, say, NotePad, and BROWSE the log in the NotePad display window. Not automation, but a useful way to see what has happened so far.

In any case, it would be nice to be able to use OPTIONS LOGPARM='WRITE=IMMEDIATE';

but you can't use it in an EG program.

You need it embedded in the server config file, OR, since there is some code that the BI or EBI environment runs BEFORE launching your program, perhaps the option could be embedded there and take effect.
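For what it's worth, a minimal sketch of embedding it that way; the usermods file name and path below are assumptions based on a typical deployment:

/* In ...\Lev1\SASApp\WorkspaceServer\sasv9_usermods.cfg (path assumed) */
-logparm "write=immediate"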

But also see https://communities.sas.com/message/134487#134487 where Chris Hemedinger MIGHT be describing the solution to your problem. I myself have not explored his information.

I'm content to do automated log parsing when the EG or StandAlone PC SAS program has FINISHED its processing.

Whether wanting the simple option of browsing the SAS log while the EG program is running, or the capability (if possible) to read that active SAS log with another program, you presumably would want to see the log contents unbuffered.

Quentin
Super User

Thanks. Sorry to hear that the server copy of the log you found was not easily parseable. Agreed, with this approach it makes sense to have LOGPARM set to immediate write. It's too bad this option can only be specified at config/startup. In addition, I would really like the ALTLOG option to be executable during a session. That would make my log-parsing life much easier. In fact, maybe both together:

  options altlog="...." (write=immediate) ;
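For comparison, a sketch of how both have to be requested today, at invocation only (the program and log file names are placeholders of mine):

  sas -sysin myjob.sas -altlog "/logs/myjob_alt.log" -logparm "write=immediate"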

My desire for log parsing within the SAS code itself is to have a single approach that can be used across environments.  While much of my development work is done now in EG, the final code might run as a DI job, or stored process server, or batch job, etc.

In particular for stored process developers, I think the mindset that SAS logs cannot be ignored is important. SAS programmers usually grow up being taught the importance of always reviewing the SAS log, because in SAS, just getting results back does not mean the code succeeded. Many people are taught to look at the log *before* looking at the results. I try to keep that mentality when developing stored processes. And I often err on the side of having a stored process return an error message about an uninitialized variable, rather than return questionable results.

jakarman
Barite | Level 11

Quentin, ARM logging (see https://en.wikipedia.org/wiki/Application_Response_Measurement) is an open standard for the purpose you are describing. SAS is one of the few adopters of it. Why build something of your own when the movement is in the other direction?

Quentin
Super User

Jaap, are you saying that ARM logging is (or can be) a replacement for the traditional SAS log? The SAS log provides (somewhat) detailed information at the step (or even statement) level, which SAS developers have relied on for years. Automated log parsing allows developers to detect errors/warnings/bad notes appearing in the log, and decide how to handle them. And of course developers can customize log scanning and error handling at the level of a job/program/sub-module/etc.

In my brief reading about ARM logging, it seems complementary to the SAS log, not a replacement. From this page: http://support.sas.com/rnd/scalability/tools/arm/arm.html and your Wikipedia link I see ARM logging as focused on analyzing performance and performance bottlenecks.

Okay, it just occurred to me that perhaps when I wrote %logparse() you were thinking I was referring to the 'official' %logparse macro SAS has made available, which extracts performance data from the log: http://support.sas.com/kb/34/301.html. I could see how ARM logging is a replacement for that %logparse macro, because both gather performance data.

But my own %logparse log-scanning macro is not about gathering performance data; instead it is focused on detecting errors/warnings/bad notes in the log and sending appropriate notifications. I don't think ARM logging is a replacement for that task, is it?

jakarman
Barite | Level 11

I would try not to focus on the traditional, very ancient logging features. Sometimes they still work; sometimes you are hurting yourself.


The new logging features are built around:

- http://support.sas.com/documentation/cdl/en/armref/64756/HTML/default/viewer.htm#p1ok36twyl2u3qn1aj1...

  That is what the APM tool and the event manager are built on. The configuration has options to set alternate logs for defined events, even a complete copy of the usual log.

- The ODS side of delivering output is abandoning the old classic single text file that PROC PRINTTO is based on. The two approaches conflict.

As you are mentioning a Linux log: look at the options for parallel processing. It is the next thing to think about, organizing your own work in more independent, smaller pieces. I know it is very difficult to change patterns of behavior.

Quentin
Super User

Thanks, I will take a look into ARM logging to see how easy it is to get a copy of the usual log. That's just what I want.

As for changing focus from the ancient logging features, I'm not likely to do that. I still see the traditional SAS log as central to SAS programming. And I see SAS developing enhanced tools for log review in modern clients (EG Log Summary, and similar in DI Studio), and the SAS log is supported across SAS EBI. I think it's safe to say that every time SAS code is executed, a SAS log is produced. Even if the developer has chosen to turn off display/return of the log, I suspect it still exists within the constructs of SAS. I don't think that will go away any time soon, and I think it will always contain important information.

Of course non-code solutions such as SAS VA are another matter.

Kurt_Bremser
Super User

The logging of a workspace server can be influenced by changing the logconfig.xml in the WS's configuration directory, and linking to it in the corresponding sasv9_usermods.cfg file.
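A minimal sketch of that link; the paths below are assumptions based on a typical Lev1 deployment:

/* sasv9_usermods.cfg for the workspace server (paths assumed) */
-logconfigloc "C:\SAS\Config\Lev1\SASApp\WorkspaceServer\logconfig.xml"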

jakarman
Barite | Level 11

Quentin, look at the several logconfig.xml files that are around.

A part of the batch server's logconfig.xml:

<!-- Batch File Appender, match the output log file name to the input -log statement -->
<appender class="FileAppender" name="BatchFileAppender">
   <param name="Append" value="false"/>
   <param name="Unique" value="true"/>
   <param name="FileNamePattern" value="%S{App.Log}%d{yyyy.MM.dd_hh.mm.ss}.log"/>
   <layout>
      <param name="ConversionPattern" value="%m"/>
   </layout>
</appender>

What you see is that the log file is created with this pattern to support the date/time naming, although it will be placed in the current directory of that batch process.

A part of the tracing configuration for the connect server (the workspace server is similar):

<!-- Rolling log file with default rollover of midnight -->
   <appender class="RollingFileAppender" name="TimeBasedRollingFile">
      <param name="Append" value="false"/>
      <param name="Unique" value="true"/>
      <param name="ImmediateFlush" value="true"/>     
      <rollingPolicy class="TimeBasedRollingPolicy">
         <param name="FileNamePattern" value=" ***/log/SASEMCS_%d_%S{pid}.log"/>
      </rollingPolicy>
      <layout>
         <param name="HeaderPattern" value="Host: '%S{hostname}', OS: '%S{os_family}', Release: '%S{os_release}', SAS Version: '%S{sup_ver_long2}', Command: '%S{startup_cmd}'"/>
         <param name="ConversionPattern" value="%d %-5p [%t] %u - %m"/>
      </layout>
   </appender>

What you see is essentially an ALTLOG function with fully dynamic date/time settings and an immediate buffer flush. Following a running program's log is possible that way.

Log parsing can still be done on those, though the question of what to look for is not really determined.... Mostly it is used to look for evidence that some process had some weird abend.

As for your remarks, I see our different histories. Educated as a "systems programmer", the job was, with a lot of others, on the technical side, supporting the work of programmers/testers/exploitation. A real OS has good logging capabilities; MVS with SMF came really far: performance, capacity, cost, usage, security, and errors, aside from other logging such as SDSF, SYSLOG, etc. Collecting and processing those is a BI analytics approach.
On Windows and Unix those are missing. So it is understandable that, coming from different areas, we have different viewpoints.

For exploitation, the processes should be clean (no notes/warnings/errors), and the RC should be the only setting to be trusted (by schedulers).
The developers should deliver the code that way, and the testers (system integration, acceptance) should validate those requirements. With that, no log parsing is necessary. I have seen many similar activities where developers build something because they assume they need it, although most of it is already available in some other way but kept hidden (the untold options).

Quentin
Super User

Agree, Jaap, that our different backgrounds are likely the explanation of our differing perspectives.  As a statistical programmer, I learned early on to rely on checking the log to make sure a submission completed without bad log messages.  Originally, that meant manual review of the log, then progressed to automated log scanning after a couple years.  And as you say, there are plenty of SAS programmers who have developed plenty of log scanning scripts (SAS based or non-SAS).

When I moved on to developing utility macros that would be shared with other SAS programmers, I thought about the question of whether such macros should generate return codes. While some macro developers write extensive error trapping to check input user parameters etc., I took the approach that if a SAS programmer called a macro like %DuplicateCheck(data=sashelp.clas,by=name), it was okay to have the SAS log return the usual error that SASHELP.CLAS does not exist. I don't need to add logic to my macro to trap such an error and provide a custom error message or custom return code. If you use one of my macros, I assume you are a SAS programmer, and that you review the log from your SAS jobs. (That said, I do write some error handling code...)

When I moved on to developing web reports / stored processes that would be run by naïve users (on a variety of data, with a range of cleanliness), I took the same attitude I had when I was a statistical programmer. Since I would never trust results from SAS code that I wrote for myself without looking at the log, I also felt I should not ask users to trust results from code that I have written without an [automated] log review. Note that is true regardless of the amount of error handling code I have written (more than I write in utility macros), and regardless of the amount of validation that has been done. I always look at the log. And I have found the log to be the best indicator of problems in a submission (and often more sensitive than &SYSERR etc., especially with the NOTE2ERR option turned on).

Even for my DI-jobs (which are mostly small ETL jobs), I have found log scanning to be more sensitive than the "out-of-the-box" job return codes.  And in that setting, if I want to create my own Job return code, I end the job with a log scan which counts the number of problematic messages found in the log, and use &BadMessageCount as the return code.
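A minimal sketch of that final log-scan step, assuming the log was redirected to the fileref LOGFILE as in my pseudocode above; the message patterns are simplified and the macro name is my own:

proc printto log=log;  /* release the redirected log first (needed on Windows) */
run;

data _null_;
   infile logfile end=eof;
   input;
   if _infile_ =: 'ERROR' or _infile_ =: 'WARNING' then badcount + 1;
   if eof then call symputx('BadMessageCount', badcount);
run;

%macro setjobrc;
   %if &BadMessageCount > 0 %then %do;
      data _null_;
         abort return &BadMessageCount;  /* surface the count as the job return code */
      run;
   %end;
%mend setjobrc;
%setjobrc

(ABORT RETURN only makes sense as the very last statement of a batch job, since it ends the SAS session.)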

So for my whole career, I've avoided the idea (perhaps more common among systems programmers, or even data programmers working with pristine data from nicely constrained RDBMSs) that production code and production data could be so "rock solid" that it was not necessary to check the SAS log.

jakarman
Barite | Level 11

Quentin, that was my start with the SAS log (and any log) also. My first job was statistical programmer. At some point I reversed it all. The reasoning:
- Any log parsing that can be built will have some failures, as it is code itself. With changing releases you have to maintain, verify, and do more things.

- With changing releases, error checking and the recognition of errors will be changed by the supplier (SAS). It is hard work (regression testing) to verify that all the things that went well in the old situation still work in the new one.

The real question is whether the program/code did the intended work correctly. That is (A/B) testing. The flipped point of view:

If there is something to be found on the correct result, I test that. And when the result is found to be incorrect while the RC says otherwise, then correct the RC, as it is in error. There are so many possibilities for something to go wrong that checking for correctness is the easier approach. From a statistical background: it is easier to calculate the probability that everyone has a different birthday than to calculate all the ways some people could share a birthday.

There are some uncertainties in those, let us call them p-values, as not everything is verifiable. What is the probability of not seeing the incorrect situation when validating in some way, and what is the probability of missing it when analyzing some log? And what is the cost involved when those need to be corrected?

That is after the development stage and after validating the quality of the code (D, T, A). And with that validation of code come some checkpoints on being solid enough (no, not rock solid) to run mostly unattended by others.

  

All kinds of supplied algorithms (the SAS System, the hardware, the OS, etc.) are assumed to be reliable, and only when there is some indication that something could be in error is more investigation done.

Still, a lot of work goes into those investigations, where many of them result in conclusions that could not have been prevented/recognized by log parsers.

For example, could you imagine seeing a dump from a semop Java call in a SAS log?

Quentin
Super User

Thanks Jaap as always,

Assuming you have patience for continued discussion... I don't think I will be convinced any time soon to give up my log parsing. But I could be convinced to put more thought into additional error handling safety nets (a combination of return codes as well as the array of available SAS error handling options and statements).

I wonder if you could provide an example of the sort of error handling / return codes you typically generate/use in a simple program?

I have seen suggestions from people who use the run &cancel; approach for error handling, conditionally doing %let cancel=cancel; when they detect an error (a sketch is below). I imagine other folks check for errors after every step and do a conditional ABORT ABEND statement. Perhaps some folks just set the strictest SAS error handling options they can find (ERRORABEND, ERRORBYABEND, etc.) to force SAS to exit on error and don't code any other custom error handling.
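A minimal sketch of that run &cancel; pattern; the macro variable and the %checkstep macro name are my own:

%let cancel=;  /* empty, so steps execute normally */

%macro checkstep;
   %if &syserr > 4 %then %let cancel=cancel;  /* flag remaining steps for cancellation */
%mend checkstep;

data work.step1;
   set sashelp.class;
run &cancel;
%checkstep

data work.step2;
   set work.step1;
run &cancel;  /* resolves to RUN CANCEL; once an earlier step has failed */
%checkstep

Note that RUN CANCEL only cancels DATA steps; PROC steps would need a different guard.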

I don't think I would really want my jobs to quit the SAS session on error, because I always like the option of having post processing done at the end of the session to send email alerts/logging/etc.

I looked into what DI Studio is doing. Below is a code extract showing that it conditionally sets a global macro variable &job_rc after each step/statement. Is this the sort of return code you create from an ETL job, constantly checking &syserr or whatever other return code after every step, to see if &job_rc should be updated? And I suppose I could even update &job_rc from within a DATA step if bad data were found that would not set &syserr. While the below approach may work well for generated code, the thought of calling %rcSet() after every step of a hand-written program seems not that appealing.

%global job_rc trans_rc sqlrc ;
%let sysrc = 0;
%let job_rc = 0;
%let trans_rc = 0;
%let sqlrc = 0;
/* initialize syserr to 0 */
data _null_; run;

%macro rcSet(error); 
   %if (&error gt &trans_rc) %then 
      %let trans_rc = &error;
   %if (&error gt &job_rc) %then 
      %let job_rc = &error;
%mend rcSet; 

LIBNAME sashelp BASE "/.../sashelp/";
%rcSet(&syslibrc); 

%let SYSLAST = %nrquote(sashelp.PRDSALE); 

data 
     work.Young 
     work.Old ;
   set &SYSLAST;
   if YEAR < 2000 then 
      output work.Young;
run;

%rcSet(&syserr); 

jakarman
Barite | Level 11

Quentin, no problem, I have patience for continued discussion... It is learning from more sides; I will still learn some things.

Challenge 1. Recognizing that a SAS session has been killed or abended unexpectedly.

As any log can get lost prematurely, there is very little trustworthy information. In an interactive session it is a flash; the user will complain. But in a background process, how?
This one is solved by schedulers (e.g. LSF). They have a database registering the pid and the moment the process was spawned. When it disappears, it is an unexpected abend. Suppose you want to build that yourself (basically the same logic); then (see the sketch after this list):

- Use the -initstmt option (SAS code) to store, as the first record of a SAS dataset in a known location, the time and pid with which the process was started.
  You will know it has started, as the initstmt is executed in the wanted SAS program context.

- Use the -termstmt option (SAS code) to store the time, pid, and status when the process has ended as the second record in the same SAS dataset.

- Have a list of which programs have been started and where that known location is.
You can then verify at a regular time interval whether:

   + a job is running: one record exists in the dataset and the process pid still exists in the system;
   + a job is ready: both records exist (as filling the dataset is done with SAS, SAS processing did not stop for some reason);
   + a job abended prematurely: only one record exists and the pid is gone.
As the checking is done by a separate SAS process, that one can do the mailing for alerts or whatever.
The sysrc status and all the others are known, so reacting on them is also possible.
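A minimal sketch of that heartbeat idea; the library path, the dataset naming, and the %jobstart/%jobend macros are my own assumptions, and the macros would need to be available to the session (e.g. via the autocall facility):

/* Invocation sketch:
   sas -sysin etl.sas -initstmt '%jobstart' -termstmt '%jobend' */

%macro jobstart;
   libname joblog "/shared/joblog";  /* the agreed known location */
   data joblog.run_&sysjobid;        /* first record: process has started */
      length event $5;
      event='START'; pid=&sysjobid; ts=datetime(); rc=.;
      format ts datetime20.;
   run;
%mend jobstart;

%macro jobend;
   data endrec;                      /* second record: process has ended */
      length event $5;
      event='END'; pid=&sysjobid; ts=datetime(); rc=&syserr;
      format ts datetime20.;
   run;
   proc append base=joblog.run_&sysjobid data=endrec;
   run;
%mend jobend;

A separate monitoring SAS process can then read each run_* dataset: two records mean the job is ready, one record with a live pid means it is running, and one record with a dead pid means a premature abend.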

Challenge 2
What to check? There are a lot of basic settings, as in SAS(R) 9.4 Language Reference: Concepts, Fifth Edition, and those of SQL are to be added (SAS(R) 9.4 SQL Procedure User's Guide, Second Edition); see also the automatic macro variables that expose more of those error conditions. Not surprisingly, those error checks are not rock solid. Teradata bulkload, for example, creates error tables and will not return an error code to PROC SQL; instead you have to check for the existence of some table.
For SAS/CONNECT it is SYSINFO (SAS(R) 9.4 Macro Language: Reference, Fourth Edition) that has to be retrieved from the remote session with %SYSRPUT; a sketch is below. As that approach likely involves another system, collecting log files is more troublesome.
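A minimal sketch of pulling a remote status back with %SYSRPUT; the session id rhost and the remote step are placeholders, and a configured SAS/CONNECT sign-on is assumed:

signon rhost;

rsubmit;
   data work.t;                      /* some remote step */
      set sashelp.class;
   run;
   %sysrput remote_syserr=&syserr;   /* copy the remote status into a local macro variable */
endrsubmit;

%put NOTE: remote step ended with SYSERR=&remote_syserr;

signoff;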

Challenge 3

I agree it is very troublesome to do the same as the DI-generated code by adding macro code before and after each block of SAS code. Suppose you have the goal of generating some dataset (report). You could delete it at the start of the code (an instruction to the programmer/developer) and check for it at the end (another instruction to the programmer/developer). The checking should be done with macro code, as that will still be executed even after SAS processing has stopped (syntax-check mode). A sketch is below.
The foolproof approach would be a default status of "in error", set to "OK" only when that last validation says so. In reality you have to accept the default status of "OK" and set it to "in error".
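A minimal sketch of that delete-then-verify convention; the target dataset name is a placeholder of mine:

proc datasets lib=work nolist nowarn;  /* delete the target up front so a leftover
                                          copy can never pass the final check */
   delete final_report;
quit;

/* ... main processing is expected to rebuild work.final_report ... */

%macro finalcheck;
   %if %sysfunc(exist(work.final_report)) %then
      %put NOTE: target dataset found, job looks OK.;
   %else
      %put ERROR: target dataset missing, job is in error.;
%mend finalcheck;
%finalcheck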
 

Challenge 4

Will this check everything? Of course not. I do not know how to check that the program was functionally processed correctly. You can put in wrong, bad data and it can run technically correctly, delivering rubbish. That is the limit of the level you can reach. Protecting against wrong data is a programmer's/developer's job, to be validated by testing.

         
          

