First of all, I should say that I was only familiar with SAS VA 9.4 on a single box, not on a distributed Hadoop platform. I should also point out that I recently started working at this company and found the SAS VA platform as I'm describing it here -- I'm the RHEL administrator now, but I neither installed nor configured this SAS VA platform or its RHEL boxes.
My SAS VA 9.4 platform consists of 16 Red Hat Enterprise Linux 6.4 boxes (16 cores and 256 GiB of RAM each).
One of these boxes is the "main node" or "core node", where all the processes run. On the others (the "slaves"), only a few Hadoop processes (I guess) are running.
My problem is that there are no startup/shutdown scripts in either the /etc/init or /etc/init.d directories. The only guide written by the SAS staff that I can consult says there should be five stages to start up the SAS VA platform on the main box. They are:
Start the SAS servers.
Start the LASR monitor.
Start Hadoop HDFS.
Start Hadoop YARN.
Start Hadoop MapReduce.
The same guide says that the procedure to shut down the main box is the start procedure in reverse order.
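Assuming the SAS configuration lives under /opt/sas/config/Lev1 and Hadoop under /opt/hadoop (both paths are guesses, as is the LASR monitor script name), the five stages and their reverse could be sketched as below. The sketch only echoes each command, so the ordering can be sanity-checked before wiring in the real invocations:

```shell
#!/bin/sh
# Sketch of the documented start/stop sequence on the main node.
# SASCONFIG, HADOOP_HOME and the LASR monitor script name are assumptions.
SASCONFIG=/opt/sas/config/Lev1
HADOOP_HOME=/opt/hadoop

run() { echo "$@"; }   # echo only; swap in "$@" to actually execute

start_all() {
    run "$SASCONFIG/sas.servers" start     # 1. SAS servers
    run "$SASCONFIG/LASRMonitor.sh" start  # 2. LASR monitor (name assumed)
    run "$HADOOP_HOME/sbin/start-dfs.sh"   # 3. Hadoop HDFS
    run "$HADOOP_HOME/sbin/start-yarn.sh"  # 4. Hadoop YARN
    run "$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh" start historyserver  # 5. MapReduce
}

stop_all() {
    # Shutdown is the start sequence in reverse.
    run "$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh" stop historyserver
    run "$HADOOP_HOME/sbin/stop-yarn.sh"
    run "$HADOOP_HOME/sbin/stop-dfs.sh"
    run "$SASCONFIG/LASRMonitor.sh" stop
    run "$SASCONFIG/sas.servers" stop
}
```

The Hadoop script names (start-dfs.sh, start-yarn.sh, mr-jobhistory-daemon.sh) are the standard Hadoop 2.x ones under $HADOOP_HOME/sbin; whether stage 5 really means the job history server on this platform is my assumption.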
My questions are:
Is that start/stop procedure correct?
Does SAS have start/stop scripts to implement the right start/stop procedure?
What start/stop scripts should be configured on the "slave" boxes?
I know the sas.servers script is used to start/stop the SAS servers, but I'm new to Hadoop, and my questions hinge on how Hadoop changes the SAS start/stop procedure.
Today I found the slaves.sh script in the $HADOOP_HOME/sbin directory; it is based on SSH and is used by yarn-daemons.sh and hadoop-daemons.sh to start the remote daemons. That answers my third question.
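To illustrate what slaves.sh does, here is a minimal, hypothetical re-implementation: it reads a slaves file (one hostname per line) and runs one command per host over SSH. In this sketch the ssh invocation is only echoed, and the default file path is an assumption:

```shell
#!/bin/sh
# Minimal sketch of the slaves.sh idea: fan one command out to every
# host listed in a slaves file, over SSH.
SLAVES_FILE="${HADOOP_SLAVES:-./slaves}"   # default path is an assumption

run_on_slaves() {
    while IFS= read -r host; do
        case "$host" in ""|"#"*) continue ;; esac   # skip blanks and comments
        echo ssh "$host" -- "$@"   # sketch: echo only; the real script runs ssh
    done < "$SLAVES_FILE"
}
```

As far as I can tell, the real slaves.sh additionally honors SSH options from the Hadoop environment and runs the hosts in parallel; the sketch stays sequential for clarity.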