Ronein
Onyx | Level 15

Hello

I have a SAS program that runs automatically every day.

If the log contains warning or error messages, I want to save the log file, attach it to an email, and send it to me.

What is the way to do this?

Note that I want to send the email with the attached log file only in case of an error or warning.

 

11 Replies
Kurt_Bremser
Super User

Let the scheduler handle this.

 

I did it this way:

Use a shell script to run your program. In the script, define the location of the log file. If the SAS program ends with a non-zero return code, stream the log file to standard output. The scheduler catches the output and includes it in the mail.
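The wrapper approach described above could be sketched like this. The paths are illustrative, and the "sas" function is a stub standing in for the real executable so the sketch runs anywhere; replace it with your actual invocation (e.g. sas -sysin "$PGM" -log "$LOG").

```shell
#!/bin/sh
# Sketch of the wrapper: run the job, and if it ends with a non-zero
# return code, stream the log to standard output so the scheduler
# can capture it and include it in the alert mail.

PGM=daily_job.sas
LOG=/tmp/daily_job.log

# Stub standing in for the SAS executable: writes a log containing
# an ERROR and returns a non-zero rc, as a failing job would.
sas() {
    printf 'NOTE: job started\nERROR: something broke\n' > "$LOG"
    return 2
}

sas "$PGM"
rc=$?

if [ "$rc" -ne 0 ]; then
    # Stream the log to standard output; the scheduler catches it
    # and includes it in its mail.
    cat "$LOG"
fi
# exit "$rc"   # propagate the return code to the scheduler
```

The key design point is that the wrapper itself does no mailing; it only exposes the log and the return code, and the scheduler handles the alerting.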

I also kept a dataset recording all program runs, with links to the respective logs; from this I built a website which allowed "live" viewing of the logs by simply refreshing the browser.

Quentin
Super User

I would look on lexjansen.com for papers with examples.

 

My approach:

  1. SAS job starts by using PROC PRINTTO to send the log to a file.
  2. At the end of the job, scan the log file to look for any errors, warnings, or bad notes, and save those messages.  (Before the step that scans the log, set options obs=max replace NoSyntaxCheck in order to recover from syntaxcheck mode.  Without that, if SAS has entered syntaxcheck mode your log scan will not run.)
  3. If any bad log messages were found, send an email, with the name of the program, the messages found, link to the log file, etc.
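The post above does the scan and the mailing inside SAS; the same scan-then-mail flow can be sketched at the shell level. The mailx -a attachment flag is an assumption (it works in s-nail and Heirloom mailx; adjust for your mailer), and all paths, patterns, and addresses are illustrative.

```shell
#!/bin/sh
# Scan a finished log for errors, warnings, and "bad" notes; if any
# are found, report them and (optionally) mail them with the log
# attached. A sample log is created so the sketch is self-contained.

LOG=/tmp/scan_demo.log
RECIPIENT=me@example.com

cat > "$LOG" <<'EOF'
NOTE: The data set WORK.OUT has 100 observations.
WARNING: Variable X is uninitialized.
NOTE: Missing values were generated.
EOF

# Collect ERRORs, WARNINGs, and the bad NOTEs we care about.
BAD=$(grep -E '^ERROR|^WARNING|^NOTE: Missing values' "$LOG")

if [ -n "$BAD" ]; then
    echo "Bad log messages found:"
    printf '%s\n' "$BAD"
    # Uncomment to actually send the mail with the log attached:
    # printf '%s\n' "$BAD" | mailx -s "daily job: bad log messages" -a "$LOG" "$RECIPIENT"
fi
```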

 

I also append the data to a job history table, so I can see the full history of job runs, when they errored, number of obs written, etc.  Also I run a nightly job that checks to make sure all expected jobs have run in the past 24 hours. 

 

If you start with getting the log scanning working, then sending the email is pretty straightforward.

 

So you could start by looking on lexjansen.com for log scanning papers, then look for emailing papers.

 

The other option is to use your scheduler software and pick up job return codes.  The only downside of that is that bad log notes may not trigger a non-zero return code.  But there are system options that allow you to turn most bad log notes into warnings or errors, so this may not be much of an issue.
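Picking the job status up from the return code alone might look like this. On UNIX, batch SAS conventionally exits 0 on success, 1 when warnings were issued, and 2 or higher on errors; verify this against your own platform's documentation. The "sas" function is a stub so the sketch runs anywhere.

```shell
#!/bin/sh
# Interpret the SAS batch return code in the calling script.

sas() { return 1; }   # stub: pretend the job ended with warnings

sas -sysin daily_job.sas
rc=$?

case "$rc" in
    0) status="OK" ;;
    1) status="completed with WARNINGs" ;;
    *) status="ERROR (rc=$rc)" ;;
esac
echo "$status"
```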

Kurt_Bremser
Super User
I prefer to have the log scan and the mail creation in a separate process. Otherwise, if a serious problem arises in the SAS program (e.g. an out-of-memory exception), you'll never be notified.
So I did the log scan in our sasbatch.sh with OS means (grep) and set an "artificial" exit code, which would then make the scheduler do the alerts.
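The OS-level scan described above could be sketched as follows: grep the finished log and turn any findings into an "artificial" exit code for the scheduler to act on. The pattern for odd FTP responses and the exit code 8 are illustrative.

```shell
#!/bin/sh
# Grep-based log scan in the batch wrapper, independent of SAS's own
# return code. A sample log is created so the sketch is runnable.

LOG=/tmp/sasbatch_demo.log
printf 'NOTE: all fine so far\n627 some proprietary FTP response\n' > "$LOG"

# Patterns SAS itself would not flag (e.g. odd FTP server responses
# in the 600 range) can be added to the scan.
if grep -Eq '^ERROR|^WARNING|^6[0-9][0-9] ' "$LOG"; then
    rc=8   # artificial exit code: "log scan found a problem"
else
    rc=0
fi
echo "scan rc=$rc"
# exit "$rc"   # hand the verdict to the scheduler for alerting
```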
Quentin
Super User

If I had the shell scripting skills, I might use a similar approach.  

 

For errors that might effectively kill the session, I wouldn't get an email from my job.  But I also have a separate nightly summary job that checks that all expected jobs completed successfully.  If one of my jobs doesn't succeed, the summary job will detect it.  The only downside is I won't be notified immediately.

Patrick
Opal | Level 21

@Kurt_Bremser Why did you have to scan the SAS log and couldn't just check for the exit code of the process?

@Ronein Please provide more detail on how you run your daily job. Are you using a scheduler, or are you just submitting the job every day? Which environment are you using (OS and, potentially, scheduler)? Is it a SAS Grid or a single machine? Ideally, share your batch command with us.

 

Kurt_Bremser
Super User

@Patrick The log scan originated from a peculiar issue we encountered when we FTP'd files from the MVS mainframe. Sometimes the download would be incomplete, but without a standard FTP error message; instead we'd get a proprietary response (in the 600 range, IIRC) from the FTP server, which SAS would include in the log without issuing a WARNING or ERROR.

It seemed that although the previous job creating the file had finished successfully, MVS was still doing some "housekeeping" with its catalog when we tried to read the file.

Anyway, in such a case I would rerun the SAS program and keep a counter for the repeats. If that exceeded a certain threshold, I would exit the script with a dedicated exit code so the operators immediately knew what to look for. In RL, we never got that far. Usually the issue was "fixed" on the MVS side with the first rerun.
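The rerun-with-counter logic described above might be sketched like this. The threshold and the dedicated exit code are illustrative, and run_job is a stub that fails once and then succeeds so the sketch is runnable.

```shell
#!/bin/sh
# Rerun a flaky job, counting attempts; give up with a dedicated
# exit code once a threshold is exceeded.

MAX_TRIES=3
attempt=0

run_job() {
    # Stub: fail on the first attempt, succeed afterwards.
    [ "$attempt" -gt 1 ]
}

rc=1
while [ "$attempt" -lt "$MAX_TRIES" ]; do
    attempt=$((attempt + 1))
    if run_job; then
        rc=0
        break
    fi
done

if [ "$rc" -ne 0 ]; then
    echo "giving up after $attempt attempts"
    # exit 9   # dedicated exit code so operators know what to look for
fi
echo "attempts=$attempt rc=$rc"
```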

There might even have been a SAS option for this situation, but since I know quite a bit about UNIX shell scripting, that was the easier way to go. As of now, the issue is moot as we haven't been using FTP for a long time. The "mainframe" now runs as some kind of emulator on a Linux instance, so ssh is available OOTB. Which opened another can of worms, because the SAS implementation of SFTP is (was?) not reliable at all. Ask if you want to know more.

 

Once I had the log scan established, it was rather easy to extend it for other issues SAS would not catch on its own.

Patrick
Opal | Level 21

@Kurt_Bremser @Tom 
Thank you both for your explanations. 

So if SAS doesn't catch all errors and return them as an error condition to the shell, then one can't trust any scheduled process flow (LSF, Airflow, ...) where child nodes only execute if all parent nodes executed without error.

Do I get this right? And if yes how are you scheduling jobs? 

Tom
Super User

It is not really just a SAS issue. A program or job step written in any language can fail in ways that the language itself cannot catch.  So you should treat the SAS steps the same way you would treat steps done in other languages.  If you can think of a test that can be done after the step to confirm that it was actually successful, then build that into the flow.  If the step is supposed to make a file, add a check that the file was actually made, and so on.
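A minimal post-step check along those lines: if the step is supposed to make a file, verify the file was actually made and is non-empty before letting downstream steps run. The path is illustrative, and the file is created here only so the sketch is self-contained.

```shell
#!/bin/sh
# Verify a step's expected output actually exists and is non-empty.

OUT=/tmp/expected_output.csv
printf 'a,b\n1,2\n' > "$OUT"   # stand-in for the step's real output

if [ -s "$OUT" ]; then          # -s: exists and has size > 0
    echo "output OK"
else
    echo "output missing or empty" >&2
    # exit 1   # fail the flow so downstream steps do not start
fi
```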

 

Since the posters on this forum have experience using SAS they can tell you examples of some things you might want to protect against for the steps that are using SAS.

Kurt_Bremser
Super User
SAS will dutifully report everything it considers a WARNING or ERROR in the return code. But there are things which depend on the local definition of "error", and the admin needs to take care of those. See my previous post about funny FTP messages.
Quentin
Super User

One benefit of log scanning vs using the job's return code to detect problems is it allows each job to customize the definition of failure.

 

For example, almost all of my jobs treat notes about uninitialized variables, missing values being generated, and automatic type conversions as errors that I consider job failure.  But some programs are written to allow such messages (unfortunately). If I inherit a job which is designed to use automatic type conversions, I can easily turn off that check in my log scanner, while leaving it on for other jobs.
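Per-job customization of "what counts as failure" can be sketched as a default set of bad-note patterns with the ones a particular job explicitly allows filtered back out. The patterns and paths are illustrative, and a sample log is created so the sketch is runnable.

```shell
#!/bin/sh
# Scan with a default check list, minus the checks this job allows.

LOG=/tmp/custom_scan.log
cat > "$LOG" <<'EOF'
NOTE: Character values have been converted to numeric values.
NOTE: Variable X is uninitialized.
EOF

# Default checks; this particular job tolerates automatic type
# conversions, so those matches are dropped again.
BAD=$(grep -E 'uninitialized|Missing values were generated|have been converted' "$LOG" \
      | grep -v 'have been converted')

if [ -n "$BAD" ]; then
    echo "job failed on:"
    printf '%s\n' "$BAD"
fi
```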

Tom
Super User

The return code from the SAS session is not sufficient to catch all issues.  There are many cases where the SAS code runs without errors but the result is still not correct.  There was just an example this week on this forum where a user had an issue in loading the street level mapping dataset from CSV files because of either an I/O issue or file corruption that did not cause a SAS error, just a note in the log about invalid data being read by the INPUT statement.

 

And there are also cases where SAS issues warnings that are not problems at all, such as when your SAS admins or finance group are late in updating your SAS license.


Discussion stats
  • 11 replies
  • 770 views
  • 11 likes
  • 5 in conversation