Hello, we have SAS 9.4 M6 on Linux 7: a metadata server, a mid-tier server, a grid control server, and some worker nodes.
Users are allowed to log in to all worker nodes and run jobs there (not a good design; I discussed this in one of my other questions, thanks). When I log in to any specific worker node and run a SAS job with the sas command (like "sas test.sas" or "bsub test.sas"), the job runs on that same server and is not distributed across the worker nodes, which I believe is a drawback.
Do we need to write some sort of OS script so that whenever a user runs a job from any worker node (x, y, z) with the sas or bsub command, the job is distributed across the different worker nodes? Thank you.
Hello @woo ,
you basically need to enable your SAS jobs to be split, or to use DS2 data step procedures, to enable parallel execution; then the grid can distribute the workload.
Does that answer your question?
Other users have created a script named 'sas' that calls the SAS Grid Manager Command Line Utility (sasgsub) under the covers to submit the job to the grid and wait for it to process the file. The user types 'sas test.sas' and a grid job is created, the program is run in batch in the grid, and the results returned.
This works well unless the users know how to get to the real SAS executable. Once they figure that out, they will use it.
'bsub test.sas' will submit a job to run the program 'test.sas'. I'm not sure what will happen there; it will probably fail, since 'test.sas' is not an executable program.
Thanks Juan/Doug. To be more clear:
So this is the program I have on the OS side (on one of the worker nodes, Linux).
I log in as a user and run a job with either "sas test.sas" or "bsub test.sas".
When I run a job with "sas test.sas" (the "sas" command points to ".../SASFoundation/9.4/sas") from any of the worker nodes (for example, server5), the job executes on that server only, since it runs locally (the same applies to "bsub test.sas"). Users are aware of this, and they all log in to the different worker nodes and execute batch jobs that way, so the jobs use local resources only and are not distributed across the worker nodes.
My question is: is there any way to redirect these jobs across the different servers? We can't control users executing jobs this way.
Hi @woo ,
there is, yes. Please take a look at the first link I posted. SIGNON and RSUBMIT are your friends here.
In addition, have you defined queues in LSF and Grid Option Sets in the SAS metadata? This is another way to manage the workload of the machines, based on the client that launches the process, timeframes, the application server involved, etc.
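As a hedged illustration of the SIGNON/RSUBMIT approach (the application server name SASApp and the sort steps are assumptions, not your configuration):

```sas
/* Sketch only: let the grid choose a host for each remote session.
   'SASApp' is an assumed application server name. */
%let rc = %sysfunc(grdsvc_enable(_all_, server=SASApp));

signon task1;
signon task2;

rsubmit task1 wait=no;
   /* runs on whichever grid node LSF picks for task1 */
   proc sort data=sashelp.class out=work.sorted1; by age; run;
endrsubmit;

rsubmit task2 wait=no;
   /* runs in parallel, possibly on a different node */
   proc sort data=sashelp.class out=work.sorted2; by height; run;
endrsubmit;

waitfor _all_ task1 task2;
signoff _all_;
```

Note that each remote session has its own WORK library, so results must be written to a shared location or transferred back explicitly.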
Thank you for explaining.
We have not set up Grid Option Sets or specific queues yet; we might need to do that as well. Additionally,
when I, as a user, run the command below, it asks for the metadata password, so I believe some parameter or file update is missing:
.../Lev1/Applications/SASGridManagerClientUtility/9.4/sasgsub -gridsubmitpgm test.sas
SAS Grid Manager Client Utility Version 9.46 (build date: Nov 7 2018)
Please enter the metadata password:
ERROR: Access denied.
ERROR: Access denied.
Also, if I run the job with the bsub command, I can't find the log.
Job <826> is submitted to default queue <normal>.
$bhist -l 826
Job <826>, User <xyz>, Project <default>, Command <test.sas>
Thu Mar 21 16:39:22: Submitted from host <hostname.company.com>, to Queue <normal>, CWD <$HOME>;
Thu Mar 21 16:39:23: Dispatched 1 Task(s) on Host(s) <hostname.company.com>,
Allocated 1 Slot(s) on Host(s) <hostname.com>,
Effective RES_REQ <select[type == local] order[r15s:pg] >
Thu Mar 21 16:39:23: Starting (Pid 214681);
Thu Mar 21 16:39:23: Running with execution home </home/userid>, Execution CWD
</home/userid>, Execution Pid <12345>;
Thu Mar 21 16:39:23: Exited with exit code 127. The CPU time used is 0.1 seconds;
Thu Mar 21 16:39:23: Completed <exit>;
Summary of time in seconds spent in various states by Thu Mar 21 16:39:23
PEND PSUSP RUN USUSP SSUSP UNKWN TOTAL
1 0 0 0 0 0 1
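Exit code 127 is the shell's "command not found" status: LSF tried to execute test.sas directly, and a .sas file is not an executable. A hedged sketch of a working submission (the SAS path and queue name are assumptions for illustration):

```shell
# Submit the SAS executable with the program as its argument, not
# the .sas file itself. -o captures the LSF job output so the log
# is findable; %J expands to the LSF job ID.
bsub -q normal -o "$HOME/test.%J.out" \
    /opt/sas/SASFoundation/9.4/sas test.sas
# In batch mode SAS also writes test.log in the submission
# directory unless -log points it elsewhere.
```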
You are running into 2 problems:
Or you can just use SASGSUB, which knows how to run SAS programs on the grid and will transport the SAS program and its results to and from the grid. You would run "sasgsub -gridsubmitpgm test.sas".
If SASGSUB is installed, it is located at <config>/<LevX>/Applications/SASGridManagerClientUtility/9.4/sasgsub
Have a look at .../Lev1/Applications/SASGridManagerClientUtility/9.4/sasgsub.cfg.
SASGSUB is asking for the password of the metadata user defined in this file (it was defined during the installation process).
If you want to use another user, you have to specify additional SASGSUB options, e.g. METAUSER and METAPASS.
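A hedged sketch of such a call (the path, user name, and password are placeholders):

```shell
# Supply metadata credentials explicitly so sasgsub does not prompt.
# -metapass also accepts a password encoded with PROC PWENCODE,
# which avoids putting cleartext on the command line.
/opt/sas/config/Lev1/Applications/SASGridManagerClientUtility/9.4/sasgsub \
    -gridsubmitpgm test.sas \
    -metauser myuser \
    -metapass 'mypassword'
```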
It is a little bit tricky to get the grid going; you should also have a look at the LSF config files. If you haven't defined hostgroups, queues, and the usage of the hosts in LSF, then jobs can't be distributed (because LSF may not know that it could distribute to other hosts).
If you want to look at the LSF configuration without access to the config files, you can use LSF commands (the prerequisite is that profile.lsf is sourced in the session):
- bhosts -w >> shows the hosts available
- bmgroup -rw >> shows the defined hostgroups
- bqueues -l normal >> shows the config of the queue "normal"
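For orientation, a minimal queue definition in lsb.queues might look like the following (a hedged sketch; the values are placeholders, and changes only take effect after badmin reconfig):

```
Begin Queue
QUEUE_NAME   = normal
PRIORITY     = 30
HOSTS        = all        # allow dispatch to every LSF host
DESCRIPTION  = Default queue for grid batch jobs
End Queue
```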