RaviSPR
Obsidian | Level 7

Hi,

 

We are using SAS 9.4 and IBM Process Manager (Flow Manager) for scheduling and monitoring SAS jobs.

1) I have created some new jobs in the SAS MC Scheduler. But when I checked the same jobs in Flow Manager, they were not triggered automatically at their trigger times. So I triggered those jobs manually from Flow Manager, but they immediately exit with exit code 127.

And some jobs do not start at all, even though I have tried rerunning them manually from Flow Manager a number of times. When they do start, they also exit with exit code 127. What could be the reason for this? Meanwhile, some other jobs are triggering on time and running successfully.

The same issue repeats the next day: some jobs trigger automatically but throw exit code 127, and some jobs are not triggered automatically at all. Can you please help me with this?

 

2) What could be the reason for a particular job not generating its log file at the expected location? I can see that some other jobs generate log files successfully. I have checked all the settings in the SAS MC Scheduler.

 

Thanks

RaviSPR

 

 

1 ACCEPTED SOLUTION

Accepted Solutions
Timmy2383
Lapis Lazuli | Level 10

You should be able to modify the lsb.queues file.  It's likely under <LSF_CONF>/lsbatch/sas_cluster/configdir.

 

In my configuration, I created a "priority" queue and assigned the valid execution hosts (i.e. those machines running your app tier/workspace or batch servers).

 

Here's an example of mine:

 

Begin Queue
QUEUE_NAME=priority
PRIORITY=43
PREEMPTION=PREEMPTIVE
HOSTS=host1 host2 host3
DESCRIPTION=Jobs submitted to this queue are scheduled as urgent jobs. Jobs in this queue can preempt jobs in lower-priority queues.
RERUNNABLE=NO
NICE=10
End Queue

 

Just substitute "host1 host2 host3" with the appropriate server names for your environment.  I then made sure that any job scheduled with Process Manager is submitted to the "priority" queue.

 

Likewise, you may be able to add the "HOSTS=" field to your Default queue.
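To round this out: after editing lsb.queues, the running cluster has to pick up the change. A minimal sketch of the admin steps (assuming a standard LSF install and LSF administrator rights; this is a cluster-side operation, so adjust for your site):

```shell
# Check the edited configuration for syntax errors before applying it
badmin ckconfig

# Reload the batch configuration so mbatchd picks up the new/changed queue
badmin reconfig

# Verify the queue exists and shows only the intended execution hosts
bqueues -l priority
```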


7 REPLIES
Timmy2383
Lapis Lazuli | Level 10

Can you check the Default queue settings to see what hosts are selected for execution?  I had a similar problem and had to modify my queues so that only the compute nodes (the ones that run the SASApp servers) were selected.  Otherwise, it was trying to submit the jobs to other hosts in my grid (like the metadata server), which would fail with a 127 code.
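As a sketch of how to check this from the command line (I'm assuming here that your default queue is named "normal"; substitute your own queue name):

```shell
# List all queues with their priorities and job counts
bqueues

# Show the full definition of one queue; look for the HOSTS line.
# If HOSTS is empty or set to "all", LSF may dispatch to every host in
# the cluster, including machines (like the metadata server) that
# cannot run SAS jobs.
bqueues -l normal
```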

RaviSPR
Obsidian | Level 7
Yes, you are correct. Currently all the jobs are scheduled to the Default queue. Do you mean I should ask my admin to change the queue to either the normal or priority queue in the SAS MC Scheduler?

Thanks
RaviSPR
Anand_V
Ammonite | Level 13

Hi @RaviSPR

 

If no SAS log is generated, there is likely an issue with the command you are using to execute the job. Try running the same command on a non-LSF system and see if it executes.

 

Whenever LSF is not able to find something supplied in the command (it could be the file name as well), exit code 127 is generated.

 

Here's sample code for you:

 

bash-4.1$ ls -l neww.sas
ls: cannot access neww.sas: No such file or directory
bash-4.1$ bsub neww.sas
Job <377292> is submitted to default queue <normal>.
bash-4.1$ bjobs -l 377292

Job <377292>, Status <EXIT>, Queue <normal>, Command <neww.sas>
Submitted from host , CWD <$HOME>;
Started on <>, Execution Home <>, Execution CWD <>;
Exited with exit code 127. The CPU time used is 0.1 seconds.
Completed <exit>.
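To add to this: exit code 127 is not specific to LSF. Any POSIX shell returns 127 when the command itself cannot be found, which is why a wrong path or missing script in the bsub command line surfaces as this code. You can confirm on any Linux host (the command name below is just a made-up example):

```shell
# Try to run a command that does not exist; the shell exits with 127
bash -c 'this_command_does_not_exist_12345' 2>/dev/null
echo "exit code: $?"   # prints: exit code: 127
```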

 

Hope this helps.

 

Thanks,

Timmy2383
Lapis Lazuli | Level 10
RaviSPR,

I don't think you necessarily need to change the queue. You need to reconfigure the Default queue so it's only able to submit to the correct hosts.

Do you have RTM installed?
RaviSPR
Obsidian | Level 7
No, Timmy, I don't have RTM. I will check with the admin about reconfiguring the Default queue.

RaviSPR
Obsidian | Level 7

Hi Timmy,

 

Yes. I asked my admin to add 2 more hosts to the lsb.queues file, since some of our scheduled jobs were being sent to run on these 2 servers, on which no sasappcomm server is installed, while the command we pass contains ../sasappcomm1/....sh.

He agreed to install the sasappcomm server on the remaining 2 hosts.

 

Thanks a lot.

Regards,

Ravi

 


Discussion stats
  • 7 replies
  • 5511 views
  • 3 likes
  • 3 in conversation