Yes, in fact, I do know HOW. I wrote SASPy 🙂
And I've had to enhance that check a couple of times over the years to account for SAS idiosyncrasies, like text appearing between the ERROR and the colon, and for cases where the user has enhanced the logging so the log contains other formatted content (timestamps, pids, and so on) before or after the SAS log output, so ERROR isn't even in column 1 of the log I get back. Lots of crazy stuff.
Here's the check (logd is the log text returned from the submitted code):

    if re.search(r'\nERROR[ \d-]*:', logd):
        warnings.warn("Noticed 'ERROR:' in LOG, you ought to take a look and see if there was a problem")
        self._sb.check_error_log = True
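To show what that pattern tolerates, here's a small self-contained illustration; the sample log fragments are made up, but the regex is the same one as above. It matches plain "ERROR:" as well as numbered messages like "ERROR 22-322:", as long as ERROR starts right after a newline.

    import re

    # Made-up log fragments; the real logd comes back from submitted SAS code.
    sample_logs = [
        "15   data w; set x; run;\nERROR: File WORK.X.DATA does not exist.\n",
        "16   proc print dtaa=w; run;\nERROR 22-322: Syntax error, expecting one of the following: DATA, ...\n",
        "NOTE: The data set WORK.W has 5 observations and 2 variables.\n",
    ]

    for logd in sample_logs:
        if re.search(r'\nERROR[ \d-]*:', logd):
            print("ERROR found in this fragment")
        else:
            print("no ERROR in this fragment")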
Unfortunately, SAS was never completely consistent about programmatically assessing the results of submitted code, the way a regular programming language lets you call a function and check a return code. As others have mentioned, sometimes you see ERROR: in the log and there wasn't really an error that kept things from working. Sometimes something doesn't work and you don't get an ERROR. A WARNING neither proves something failed nor proves it worked. The automatic macro variables like SYSERR, SYSINFO and all the others aren't 100% reliable either. All that.
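For what it's worth, if you do want to look at one of those macro variables from Python, one approach is to capture it into a user macro variable inside the code you submit and pull it back afterward. A rough sketch (it assumes a working SASPy session; the step and variable names are just for illustration), with the caveat above that the value isn't always meaningful:

    import saspy

    sas = saspy.SASsession()   # uses your normal saspy configuration

    # Capture &SYSERR right after the step, inside the same submit() call,
    # so nothing else runs in between and resets it.
    sas.submit("""
    data work.class2;
       set sashelp.class;
    run;
    %let step_rc = &syserr;
    """)

    step_rc = sas.symget('step_rc')
    print('SYSERR after the data step:', step_rc)   # 0 generally means the step ran clean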
So, I try my best to provide these things so you have the best chance of programmatically assessing your code. The best advice may be to submit only one SAS step of code at a time and use whichever method (Error check, Macro, ...) works best for assessing that step, instead of submitting a whole program full of stuff and trying to assess all of it. Note you can also do things like, after a data step, check that the table exists and has the number of rows you expected; see the sketch below.
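Here's a rough sketch of that step-at-a-time idea with SASPy. The session setup, table, and expected row count are assumptions for illustration, not anything your program has to match:

    import re
    import saspy

    sas = saspy.SASsession()

    # Submit one data step by itself and keep its log
    res = sas.submit("""
    data work.class2;
       set sashelp.class;
    run;
    """)

    # 1) Apply the same ERROR check to just this step's log
    if re.search(r'\nERROR[ \d-]*:', res['LOG']):
        print("ERROR seen in the log for this step")

    # 2) Did the table actually get created?
    if not sas.exist('class2', libref='work'):
        print("work.class2 was not created")

    # 3) Does it have the number of rows expected?
    sas.submit("proc sql noprint; select count(*) into :nobs from work.class2; quit;")
    nobs = int(sas.symget('nobs'))
    if nobs != 19:   # sashelp.class has 19 rows
        print(f"unexpected row count: {nobs}")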
SASPy is used in-house for testing, and that testing is automated and self-assessing, using these various means to decide whether the code passed or failed. I wish there were simply one return code you could check for any SAS code you submit, but there isn't.
Tom