JDamian
Calcite | Level 5

Hi everyone.

 

First of all, I must say I was only familiar with SAS VA 9.4 on a single box, not on a distributed HADOOP platform. I should also emphasize that I recently started working at this company and found the SAS VA platform already set up as described here -- I'm now the RHEL administrator, but I neither installed nor configured this SAS VA platform or its RHEL boxes.

 

My SAS VA 9.4 platform consists of 16 Red Hat Enterprise Linux 6.4 boxes (16 cores and 256 GiB of RAM each).

One of these boxes is the "main node" or "core node", where all the processes run. On the others (the "slaves"), only a few HADOOP processes (I guess) are running.

 

My problem is that there are no start/stop scripts in either the /etc/init or /etc/init.d directories. The only guide from the SAS staff that I can consult shows five stages to start up the SAS VA platform on the main box. They are:

 

  1. Start SAS servers.
  2. Start LASR monitor.
  3. Start HADOOP HDFS.
  4. Start HADOOP Yarn.
  5. Start HADOOP MapReduce.

 

The same guide shows that the shutdown procedure for the main box is the start procedure in reverse order. (A sketch of both sequences follows below.)
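
In other words, something like the following rough sketch -- the paths below are only guesses for a typical deployment, and the LASR monitor command in particular is a placeholder I haven't verified:

#!/usr/bin/env bash
# Rough sketch of the five start stages on the main box (all paths assumed).
SAS_CONFIG=/opt/sas/config/Lev1      # assumed SAS configuration (Lev) directory
HADOOP_HOME=/usr/local/hadoop        # assumed Hadoop install location

$SAS_CONFIG/sas.servers start                                    # 1. SAS servers
# <start the LASR monitor here - deployment-specific, placeholder>  # 2. LASR monitor
$HADOOP_HOME/sbin/start-dfs.sh                                    # 3. HADOOP HDFS
$HADOOP_HOME/sbin/start-yarn.sh                                   # 4. HADOOP YARN
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver     # 5. HADOOP MapReduce

# Shutdown is the same steps in reverse order:
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh stop historyserver
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/stop-dfs.sh
# <stop the LASR monitor here - placeholder>
$SAS_CONFIG/sas.servers stop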

 

My questions are:

  1. Is that start/stop procedure correct?
  2. Does SAS have start/stop scripts to implement the right start/stop procedure?
  3. Which start/stop scripts should be configured on the "slave" boxes?

 

Thank you

3 REPLIES
FriedEgg
SAS Employee
These scripts should have been generated as part of your installation process. A copy should exist in your Lev directory, or you can try to regenerate them:

http://support.sas.com/documentation/cdl/en/bisag/68240/HTML/default/viewer.htm#n0crcjp2e0r6fln0z1sg...
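
On a default deployment, for example, the generated sas.servers script typically sits at the top of the configuration (Lev) directory; a minimal usage sketch, assuming /opt/sas/config/Lev1 (adjust the path to your install):

# Assumed Lev directory -- check where your deployment actually keeps it
cd /opt/sas/config/Lev1
./sas.servers status    # show the state of the SAS servers on this machine
./sas.servers start     # start them in the correct dependency order
./sas.servers stop      # stop them again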
JDamian
Calcite | Level 5

Thanks for that info, FriedEgg.

I know the sas.servers script is used to start/stop the SAS servers, but I'm new to HADOOP, and my questions hinge on how HADOOP changes the SAS start/stop procedure.

Today I found the slaves.sh script in the $HADOOP_HOME/sbin directory; it is based on SSH and is used by yarn-daemons.sh and hadoop-daemons.sh to start the remote daemons. This answers my third question.
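
For example (assuming $HADOOP_HOME and $HADOOP_CONF_DIR are set and etc/hadoop/slaves lists the slave hosts):

# slaves.sh runs the given command over SSH on every host in the slaves file;
# the *-daemons.sh wrappers use it to start the per-node daemons remotely.
$HADOOP_HOME/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode
$HADOOP_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR start nodemanager
# (start-dfs.sh / start-yarn.sh on the main node call these wrappers for you)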

FriedEgg
SAS Employee
#!/usr/bin/env bash
set -e
set -o pipefail

# Environment expected by the Hadoop daemon scripts; exported so the
# child start-*.sh / *-daemon.sh scripts can see it.
export HADOOP_PREFIX=/usr/local/hadoop
export HADOOP_YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_LOG_DIR=${HADOOP_YARN_HOME}/logs
export YARN_IDENT_STRING=root
export HADOOP_MAPRED_IDENT_STRING=root

function _fmt () {
  color_ok="\x1b[32m"
  color_bad="\x1b[31m"

  color="${color_bad}"
  if [ "${1}" = "info" ]; then
    color="${color_ok}"
  fi

  color_reset="\x1b[0m"
  if [ "${TERM}" != "xterm" ] || [ -t 1 ]; then
    # Don't use colors on pipes or non-recognized terminals
    color=""
    color_reset=""
  fi
  echo -e "$(date -u +"%Y-%m-%d %H:%M:%S UTC") ${color}$(printf "[%4s]" ${1})${color_reset}";
}

function info () { echo "$(_fmt info) ${@}" 1>&2; }

# Start HDFS (NameNode/DataNodes) via the stock Hadoop helper script.
function startHdfs {
  info "starting hdfs"
  $HADOOP_PREFIX/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
}

# Start YARN (ResourceManager, NodeManager) and the MapReduce JobHistory server.
function startYarn {
  info "starting yarn"
  $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
  $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
  $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
}

startHdfs
startYarn
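
A matching stop sequence (the same steps in reverse order) might look like this, under the same assumptions as the start functions above:

# Stop the MapReduce JobHistory server and the YARN daemons first...
function stopYarn {
  info "stopping yarn"
  $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR stop historyserver
  $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
  $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
}

# ...then HDFS.
function stopHdfs {
  info "stopping hdfs"
  $HADOOP_PREFIX/sbin/stop-dfs.sh --config $HADOOP_CONF_DIR
}

stopYarn
stopHdfs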

