On z/OS a couple of years ago I implemented some JCL that preceded the standard batch invocation of SAS. The bit-on-the-front captured the equivalent of a process ID (jobname and JES job ID); with this info it created a TSO command to capture a copy of the complete output of the SAS job currently running. The "bit-on-the-front" then launched another job, which I refer to as a shadow job. The shadow job would only start once the original had finished:
1 the shadow job would invoke TSO in batch to issue that command created earlier, which filled a task-specific file with the original job's entire JES output (not only the SAS log but all other files written to SYSOUT, which included job messages, MSGCLASS output, and any ODS LISTING written to SYSOUT),
2 it then invoked SAS again to parse that output file, collecting job messages as well as some application-specific info from the SAS logs, and
3 in that final SAS step it would then arrange delivery to LAN areas (using the Connect:Direct service approved by the client for SSL delivery from the z/OS mainframe to servers outside its security domain), delivering a copy of the SAS job-output files, and
4 finally, SAS sent an email to named individuals (or a team mailbox) with links to the files written to the destination area of that SSL delivery.
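The log-parsing idea in step 2 translates to other platforms too. Here is a minimal sketch in Python (not the SAS code we actually used); the function name and the assumption that interesting lines start with ERROR: or WARNING: are mine, but that prefix convention is how the SAS log really flags problems:

```python
def summarize_sas_log(log_text):
    """Collect ERROR: and WARNING: lines from captured SAS log text.

    A real shadow job would read the whole captured JES output file,
    which also contains job messages and MSGCLASS output; this sketch
    only handles the SAS-log portion.
    """
    summary = {"errors": [], "warnings": []}
    for line in log_text.splitlines():
        if line.startswith("ERROR:"):
            summary["errors"].append(line)
        elif line.startswith("WARNING:"):
            summary["warnings"].append(line)
    return summary

# Hypothetical fragment of a captured SAS log:
sample = """NOTE: The SAS System used: real time 0.02 seconds
WARNING: Variable X was not found.
ERROR: File WORK.MISSING.DATA does not exist.
"""
print(summarize_sas_log(sample))
```

The summary dict is what the final step would feed into the delivery and email reporting.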
That final SAS step was able to accomplish a lot of validation/monitoring of job processes (not only SAS).
The scheduling trick to ensure the shadow job didn't start until the original had finished was simplistic on z/OS (just use the same jobname again; JES won't run two jobs with the same name at the same time). I'm not sure whether implementing a dependency on the original job would be as easy with unix cron or Windows AT or your preferred job scheduler.
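Since cron has no built-in job dependencies, one simple substitute is a wrapper that runs the original job and only then launches the shadow. A minimal sketch, assuming both jobs are ordinary commands (the command lists below are placeholders):

```python
import subprocess
import sys

def run_with_shadow(original_cmd, shadow_cmd):
    """Run the original job; start the shadow only after it finishes.

    subprocess.run blocks until the command completes, so the shadow
    cannot start early -- roughly the guarantee z/OS gives when two
    jobs share a jobname. The shadow runs even if the original fails,
    and receives the original's return code as its last argument so it
    can report on failures too.
    """
    original = subprocess.run(original_cmd)
    shadow = subprocess.run(shadow_cmd + [str(original.returncode)])
    return original.returncode, shadow.returncode

# Placeholder commands standing in for the real batch job and shadow job:
rc_orig, rc_shadow = run_with_shadow(
    [sys.executable, "-c", "print('original job running')"],
    [sys.executable, "-c", "import sys; print('shadow saw rc', sys.argv[1])"],
)
```

You would then schedule only the wrapper in cron, rather than trying to time two separate entries.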
The principal requirements were
1 that job dependency
2 the ability to collect the (required) SAS job outputs
3 extreme reliability (so no trying to be very clever 😉)
4 a guarantee that the original job would finish
We were fortunate to have a platform (zOS) which provided these requirements.
good luck
peter