
New Considerations for SAS Grid Manager 9.4 M6


With 9.4 M6, SAS Grid Manager is now offered with a new default grid workload manager: the SAS Workload Orchestrator. The SAS Workload Orchestrator is a fresh approach to managing workload distribution for SAS Grid Manager. Don't worry: SAS continues to offer variations of SAS Grid Manager for use with IBM's Platform software or with Apache Hadoop's YARN, and those products will continue to be supported for the foreseeable future.

 

SAS Workload Orchestrator works differently from those other grid workload managers, and when deploying SAS Grid Manager with SAS Workload Orchestrator, there are some new considerations to address after the SAS Deployment Wizard has finished its tasks.

 

The following tasks are OPTIONAL. If they are not performed, SAS Grid Manager will still be operational with all critical functionality in place. So read all the way to the end before attempting any of the steps described here.

I. Operating the Grid with Python

The SAS Workload Orchestrator service can be operated individually on each grid host using the [SASCONFIGDIR]/Lev1/Grid/sgmg.sh utility script to start, restart, stop, or query status. But for a grid with many compute hosts, having to invoke the Grid/sgmg.sh script on each and every machine can get tiresome.
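For example, the script can be invoked directly on a single grid host. The sketch below uses the start and stop parameters described in this article; the exact argument for querying status may vary by release, so check the script's usage text first.

# as the SAS Installer on an individual grid host
cd [SASCONFIGDIR]/Lev1/Grid
#
# Start the SAS Workload Orchestrator service on this host
./sgmg.sh start
#
# Query whether the service is running (argument name assumed - verify with the script's usage text)
./sgmg.sh status
#
# Stop the SAS Workload Orchestrator service on this host
./sgmg.sh stop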

 

So the SAS Workload Orchestrator also offers two Python scripts in that same Lev1/Grid directory: gridStart.py and gridStop.py. As it turns out, most standard Linux deployments probably have the base Python components installed and ready to go. That's all gridStart.py needs: it works right out of the box and, when executed, automatically establishes SSH connections to each of the grid hosts and runs the Grid/sgmg.sh script with the start parameter.

 

However, gridStop.py needs a little more attention at initial deployment. When it runs, it doesn't use SSH to contact the grid hosts to execute Grid/sgmg.sh with the stop parameter. Instead, it uses HTTP to contact the SAS Workload Orchestrator process's RESTful API on the master host and directs it to shut down the grid.

 

In order for gridStop.py to make those HTTP RESTful API calls, it relies on a Python library named "requests". The Python requests library may not be installed, so you're responsible for installing it. There are four commands you can use to install the infrastructure needed to get the requests library:

 

# As root on a CentOS host (commands will differ on other Linux/UNIX distributions):
#
# Install the Extra Packages for Enterprise Linux (EPEL) repository
yum install -y epel-release
#
# Install the Python-PIP package manager
yum install -y python-pip
#
# Upgrade Python-PIP to the latest version
pip install --upgrade pip
#
# Install the Python requests library
pip install requests

 

This install procedure is only needed on the host machine(s) where you plan to invoke execution of the gridStop.py script - not on every grid host.
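Before relying on gridStop.py, you can confirm that the requests library is importable by the same Python interpreter that will run the script. A minimal check:

# as the user who will run gridStop.py, on the host where it will be invoked
#
# Confirm the requests library can be imported; prints the installed version on success
python -c "import requests; print(requests.__version__)"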

 

With the gridStart.py and gridStop.py scripts fully functional, it's easy to operate the SAS Workload Orchestrator process across many machines at once.
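For example, a typical cycle from a single administrative host might look like the sketch below. It assumes that, in a default deployment, neither script requires additional arguments; check each script's header or help output for site-specific options.

# as the SAS Installer on the host from which you operate the grid
cd [SASCONFIGDIR]/Lev1/Grid
#
# Start the SAS Workload Orchestrator on every grid host (over SSH);
# invocation with no arguments is assumed here - check the script's help for options
python gridStart.py
#
# ...and later, direct the current master (over HTTP) to shut down the grid
python gridStop.py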

II. Enable SAS Workload Orchestrator to Monitor Local Disk I/O

SAS Workload Orchestrator automatically tracks many aspects of resource utilization on the grid host machines. It monitors the number of processes running, how much CPU and RAM they’re consuming, network availability, disk activity, and much more. All of that information is helpful in determining which host is best suited to run the next grid job.

 

The SAS Workload Orchestrator can also monitor the activity and resource utilization of individual grid jobs as they're running. Some of that information is very specific and may be of limited value except in certain circumstances. In order to monitor a few specific job statistics, the SAS Workload Orchestrator needs escalated privileges. That's because grid jobs run as the userid requesting them, whereas the SAS Workload Orchestrator process usually runs as the userid of the SAS Installer account.

Disk I/O and the Principle of Least Privilege

To grant the SAS Workload Orchestrator the specific privileges it needs, the Grid Manager deployment guide directs us to grant file capabilities on the bin/sgmg executable:

 

# As root:
#
# Grant disk I/O monitoring capabilities
setcap CAP_SYS_PTRACE,CAP_DAC_READ_SEARCH+ep /opt/sas/sashome/SASFoundation/9.4/utilities/bin/sgmg

 

Setting file capabilities in this way is a great approach in support of the Principle of Least Privilege. The idea is that we're only granting the privileges which are needed, and nothing more. This security practice is an important design consideration which the SAS Workload Orchestrator can employ in environments which support it.
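You can confirm that the capabilities were applied with the companion getcap utility. A minimal check (output formatting varies slightly between getcap versions):

# As root on the host where setcap was run
#
# List the file capabilities granted to the sgmg executable
getcap /opt/sas/sashome/SASFoundation/9.4/utilities/bin/sgmg
#
# Expected output similar to:
# /opt/sas/sashome/SASFoundation/9.4/utilities/bin/sgmg = cap_dac_read_search,cap_sys_ptrace+ep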

But wait, file capabilities don't always work

As someone already familiar with SAS grid deployments, you know that a single deployment of the SAS Compute Tier on a Linux (or UNIX) host can be shared across multiple machines. One install for many hosts. Choosing the correct shared file system for this purpose is an important task, usually involving third-party software and dedicated hardware to achieve the necessary level of service.

 

However, for grid deployments which are not especially sensitive to performance or are otherwise not mission-critical, such as dev/test environments or proof-of-concept implementations, plain old NFS has historically been sufficient for the task… until now.

 

Unfortunately, the currently active implementations of the NFS protocol do not convey file capabilities to remote hosts. So if your site relies on NFS as the shared file system technology to mount a single installation of the SAS Compute Tier software across multiple grid hosts, we must use something other than file capabilities.
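If you are unsure whether your shared file system conveys capabilities, a quick hedged check is to run getcap against the shared SASHOME from each grid host. An empty result (or an error reading the attribute) on a host that mounts SASHOME over NFS means the capabilities did not carry over, even though setcap succeeded on the host that owns the storage.

# from each grid host that mounts the shared SASHOME
getcap /opt/sas/sashome/SASFoundation/9.4/utilities/bin/sgmg
#
# No output here indicates this host does not see the file capabilities,
# and the setuid approach described below is needed instead.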

 

Disk I/O and Full Root Privileges

If you want SAS Workload Orchestrator to monitor grid jobs' disk I/O statistics - and if file capabilities are not working - then there's an alternative approach: enable setuid on the bin/sgmg executable instead.

 

You've already seen setuid in action for SAS executables like elssrv, objspawn, and sasauth. Those processes run with the ability to use the full set of root privileges… but they use just a fraction of that power. We can do the same with bin/sgmg so that it can see those disk I/O stats:

 

# as root on the host where SASHOME resides
# 
# Make root the owner
chown root:sas /opt/sas/sashome/SASFoundation/9.4/utilities/bin/sgmg
#
# Enable the setuid bit
chmod 4755 /opt/sas/sashome/SASFoundation/9.4/utilities/bin/sgmg

 

Do not implement both file capabilities and setuid on bin/sgmg. Choose the one that's right for your environment.
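Whichever approach you take, it helps to verify the result. For the setuid approach, a minimal check (file size and timestamp will vary; the group will match whatever you used in the chown command):

# on the host where SASHOME resides
#
# Confirm root ownership and the setuid bit (the 's' in the owner execute position)
ls -l /opt/sas/sashome/SASFoundation/9.4/utilities/bin/sgmg
#
# Expected output similar to:
# -rwsr-xr-x 1 root sas ... /opt/sas/sashome/SASFoundation/9.4/utilities/bin/sgmg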

Linked Libraries

Establishing the appropriate level of privilege isn't enough. When root-level privileges are enabled on a file (using either file capabilities or setuid), Linux automatically changes the way shared library files are located when the newly privileged bin/sgmg executable runs. Specifically, the dynamic linker ignores the LD_LIBRARY_PATH environment variable, so it is no longer a valid way for SAS to find the library files it needs, especially those for encryption.

 

Instead we must define the path to the required SAS library files using a different approach:

  1. As root on all grid hosts, create a new file /etc/ld.so.conf.d/sgmg.conf with content:

     

    /opt/sas/sashome/Secure/sasexe
    /opt/sas/sashome/SASFoundation/9.4/sasexe
    

     

  2. To pick up that configuration, on all grid hosts execute:

     

    # as root
    # 
    # Notify the OS of changes to the linked library definitions
    ldconfig -v
    

     

The -v option directs the ldconfig utility to provide you with a verbose listing. Near the top of the output you should be able to confirm that *.so files are available from:

  • /opt/sas/sashome/Secure/sasexe
  • /opt/sas/sashome/SASFoundation/9.4/sasexe
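Rather than scrolling through the full listing, you can filter the verbose output for the SAS directories. A minimal sketch (ldconfig prints each directory followed by the libraries it registers from that location):

# as root
#
# Show the two SAS directories and the first few libraries registered beneath each
ldconfig -v 2>/dev/null | grep -A 3 '/opt/sas/sashome'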

Now that bin/sgmg and the operating system are configured to work well together, SAS Workload Orchestrator has the additional privileges it needs to monitor disk I/O for grid jobs.

Monitoring Disk I/O for Grid Jobs

Let's confirm that the SAS Workload Orchestrator can actually monitor the disk I/O of grid jobs:

  1. Log on to the SAS Workload Orchestrator web interface:
    http://[SWO-MASTER.site.com]:8901/sasgrid/index.html
  2. In the left-hand list of links, select Configuration
  3. In the tabs across the top, select Queues
  4. Click the triangle icon next to the Default queue to expand its list of attributes
  5. Scroll to the Limits section at the bottom of the page
  6. Select MaxIoTotal from the menu and click the + icon
  7. Enter a value such as 9999 (default unit is MB)

     

    At this point, any new jobs submitted to the Default queue will be dispatched with a limit of 9,999 MB of local disk I/O. If that limit is exceeded, then SAS Workload Orchestrator will kill the job.

     

  8. Submit a new job to the grid's Default queue. If the SAS Workspace Server has already been configured for grid launch, then signing on to a new session of SAS Studio will work.
  9. In the SAS Workload Orchestrator web interface, in the left-hand list of links, select Jobs
  10. Select the new job in the grid's Default queue and click on its Name value. If using SAS Studio, the name will appear similar to, "Web Infra Platform Services 9.4 - SAS Studio Mid-Tier 3.8_SASApp - Workspace Server_81CD8C78-459C-B843-BEFC-79C79B5E681C"
  11. In the tabs across the top, select Limits
  12. Verify the Current value of the maxIoTotal limit is greater than zero.

     

    [Screenshot: I_see_stats.png, showing the job's Limits tab with a nonzero Current value for maxIoTotal]

The other statistic that escalated privileges enable SAS Workload Orchestrator to monitor, and that can be selected as a Limit criterion, is MaxIoRate. Keep in mind that MaxIoTotal and MaxIoRate are only able to monitor local disk I/O. If the grid job is accessing all files over NFS, then that's a network measurement, not local disk I/O.

 

And finally, if MaxIoTotal and MaxIoRate are not limits that you want to measure in this grid environment, then there's no need to enable escalated privileges (and all that they entail) for SAS Workload Orchestrator.

III. Accessing SAS Workload Orchestrator After Failover

SAS Workload Orchestrator runs on every grid host machine. At startup, one host acts as the grid master. If the grid master fails for some reason - machine crashes, process killed, network interrupted, etc. - then the SAS Workload Orchestrator process on another host will take on the role of master.

 

In the section above describing how to monitor disk I/O for grid jobs, did you notice that the URL for the SAS Workload Orchestrator's web interface references the host machine of the current master: http://[SWO-MASTER.site.com]:8901/sasgrid/index.html?

 

If that master host goes offline, then the grid is still functional, but the web interface won't be available at that same URL. We need to configure the environment so that one static URL automatically redirects to whichever SAS Workload Orchestrator master candidate host is currently the master:

  1. As the SAS Installer on the machine hosting the SAS Web Server, create a new file /[SASCONFIGDIR]/Lev1/Web/WebServer/conf/swo.conf with content:

     

    # Load the watchdog module. This module is required by the health check module below.
    LoadModule watchdog_module "/opt/sas/sashome/SASWebServer/9.4/httpd-2.4/modules/mod_watchdog.so"
    
    # Load the health check module - uncomment LogLevel for troubleshooting. Large logs will result.
    LoadModule proxy_hcheck_module "/opt/sas/sashome/SASWebServer/9.4/httpd-2.4/modules/mod_proxy_hcheck.so"
    #LogLevel proxy_hcheck:TRACE8
    
    # Specify a port for SAS Workload Orchestrator traffic - any free port is acceptable
    Listen 8901
    
    # Configure a virtual host (using the port specified above) to route traffic to SAS Workload Orchestrator.
    <VirtualHost *:8901>
    
    # We use a health check to ensure that only the current master is available. All other hosts will fail 
    # the health check and be disabled. Only the current master will return 401, 303, or 200 on GET /sasgrid/index.html.
    # All other hosts will return 301.
    ProxyHCExpr ok401 {%{REQUEST_STATUS} == 401 || %{REQUEST_STATUS} == 303 || %{REQUEST_STATUS} == 200}
    ProxyPass / balancer://SASWorkloadOrchestrator/
    ProxyPassReverse / balancer://SASWorkloadOrchestrator/
    
    # The health check ensures that all traffic will be routed to the current master. However, we can optimize
    # even further by assigning each balancer member to a balancer member set (where each set contains only a
    # single host). This ensures that Apache will traverse the list of master candidates in the same order as
    # SAS Workload Orchestrator.
    <Proxy balancer://SASWorkloadOrchestrator>
    BalancerMember http://[SWO MASTER CANDIDATE № 0]:8901 lbset=0 hcinterval=5 hcmethod=GET hcuri=/sasgrid/index.html hcexpr=ok401
    BalancerMember http://[SWO MASTER CANDIDATE № 1]:8901 lbset=1 hcinterval=5 hcmethod=GET hcuri=/sasgrid/index.html hcexpr=ok401
    ... ... ...
    BalancerMember http://[SWO MASTER CANDIDATE № n]:8901 lbset=n hcinterval=5 hcmethod=GET hcuri=/sasgrid/index.html hcexpr=ok401
    </Proxy>
    </VirtualHost>
    
    

     

    Notice:
    • Trace-level logging will generate very large log files. While that is helpful when troubleshooting problems, we recommend keeping it disabled during normal operations. Uncomment the LogLevel proxy_hcheck line only when needed.
    • We chose a Listen port value of 8901 to match the default port on which the SAS Workload Orchestrator processes listen on their respective hosts, but we don't have to. If port 8901 isn't available on the SAS Web Server's host, then choose any other free port there. The traffic will still route to the SAS Workload Orchestrator at port 8901 on its respective host machines.
    • All SAS Workload Orchestrator master candidate hosts must be listed line-by-line at the bottom of the file in the BalancerMember directives, incrementing the value of the lbset parameter for each.
  2. Again as the SAS Installer, edit the /opt/sas/sasconfig/Lev1/Web/WebServer/conf/httpd.conf file and append the following lines:

     

    # Include the SAS Workload Orchestrator master failover load balancing
    include conf/swo.conf
    

     

  3. Then restart the SAS Web Server:

     

    # as the SAS Installer on the SAS Web Server host
    # 
    /[SASCONFIGDIR]/Lev1/Web/WebServer/bin/httpdctl restart
    

     

  4. In SAS Management Console > Plug-ins tab > Application Management folder > right-click on Workload Orchestrator and select Properties. Select the Internal Connection tab. Change the Host Name value to reference the hostname of the SAS Web Server.

With these changes, the SAS Web Server is configured to act as a reverse proxy, automatically redirecting requests for the SAS Workload Orchestrator web interface to whichever master candidate host is currently the master. Further, the SAS software is configured to reference the new reverse proxy instead of the first-deployed grid host.

 

So instead of this original URL to access the SAS Workload Orchestrator web interface:
      http://[SWO-MASTER.site.com]:8901/sasgrid/index.html

 

You will use the reverse-proxy configuration in the SAS Web Server to access the SWO interface:
      http://[SAS-WEB-SERVER.site.com]:8901/sasgrid/index.html
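If you want to confirm the health-check logic by hand, a quick check with curl illustrates what the SAS Web Server is doing behind the scenes. Based on the ProxyHCExpr above, only the current master should answer GET /sasgrid/index.html with 401, 303, or 200, while the other master candidates answer 301. The host names below are placeholders for your own master candidate hosts:

# from any host that can reach the grid machines and the SAS Web Server
# (replace the bracketed host names with your own master candidate hosts)
#
# Ask each master candidate directly; only the current master returns 401, 303, or 200
curl -s -o /dev/null -w "%{http_code}\n" http://[SWO-CANDIDATE-0.site.com]:8901/sasgrid/index.html
curl -s -o /dev/null -w "%{http_code}\n" http://[SWO-CANDIDATE-1.site.com]:8901/sasgrid/index.html
#
# Then confirm the reverse proxy routes to the current master, whichever host that is
curl -s -o /dev/null -w "%{http_code}\n" http://[SAS-WEB-SERVER.site.com]:8901/sasgrid/index.html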

More Information

If you would like to learn more about the new SAS Workload Orchestrator, as well as SAS Job Flow Scheduler and other new capabilities in SAS Grid Manager at 9.4 M6, refer to the Grid Computing in SAS® 9.4, Fifth Edition documentation.

 

When working with a multi-tiered (or multi-machine) deployment of SAS software, coordination of SAS software services as they run across hosts is important. SAS Technical Support provides the SAS_lsm utility to help manage the operations of SAS software on multiple machines.

Acknowledgements

A special word of thanks to Darwin Driggers for his long-suffering assistance in working through the nuanced aspects of this topic with me; to Doug Haigh for deep insights; and to Scott Parrish and the rest of the grid team for their cogent input as well.

Comments

Hello @RobCollum ,

 

Many thanks to you and your team for this article and the product that lies behind it.

 

I am working with one of the first implementations of this new GRID M6 with SAS Provider, in Azure and with Lustre, and I am quite thrilled with all the current improvements and the improvements still to arrive.

 

I would like to make a couple of mentions here, perhaps a couple of questions as well:

 

  • This Grid is much easier to install and maintain than the LSF grid, except for some hotfixes that were needed. Looking forward to its future.
  • The SWO failover/LB implementation in the WebServer kind of fails from time to time; I suppose the worker dies, but that is still to be investigated. If a load balancer/reverse proxy will be implemented for the web applications, I found it useful to create an additional rule for the SWO. Consider that this is a web interface hosted on the SAS compute nodes.
  • @RobCollum, is there any plan to be able to monitor non-local storage performance, such as the Shared Storage (in my case, Lustre)? I think that for most grid installations, monitoring shared storage will be as important as, or perhaps even more important than, monitoring local storage.
  • At this moment we only have live data for the current moment. It would be very nice to have historical data, perhaps in a chart, about all the monitored values. Is it in the roadmap?

 

Thank you,

 

Best regards,

Juan

 

 

 

 

Juan, 

 

Thanks for the insightful questions and observations. 

 

Two items I can respond to:

 

  1. I wrote, "Keep in mind that MaxIoTotal and MaxIoRate are only able to monitor local disk I/O. If the grid job is accessing all files over NFS, then that's a network measurement, not local disk I/O." And then you asked, "is there any plan to be able to monitor non-local storage performance, such as the Shared Storage (in my case, Lustre)?"

    The message I was trying to convey is that I used basic NFS for my shared disk, which relies on the network to move data. So it's just not something that can be monitored like a local disk. Of course, there are many techniques for mounting shared file systems, some of which appear to the OS like locally-attached storage.

    That said, your question hits a good point. The resources which SAS Workload Orchestrator can monitor do not currently include network I/O. It can monitor CPU, RAM, swap space, and local disk I/O, but not networking. I'll forward that on to product management.

  2. You also asked, "It would be very nice to have historical data, perhaps in a chart, about all the monitored values. Is it in the roadmap?"

    Refer to the SAS Environment Manager 2.5: User's Guide, Third Edition documentation for information on how to implement the SAS Environment Manager Service Architecture Framework. For that, the doc explains:

SAS Environment Manager Service Architecture provides functions and capabilities that enable SAS Environment Manager to fit into a service-oriented architecture (SOA). The package implements best practices for resource monitoring, automates and extends the application’s auditing and user monitoring capabilities, and follows industry standards to enable servers to use Application Response Measurement (ARM). These functions enable SAS Environment Manager to function as a key component in providing service-level management in a strategy that is based on the IT Infrastructure Library (ITIL).

 

Best regards,

Rob

Are the existing SASEnvironmentManager/agent-*EEs being used to provide metrics for load balancing? I could not see any references in the documentation relating to the SAS Workload Orchestrator daemon. I am trying to figure out the relationship compared with lim, pim, elim, res, and sbatchd.

AC,

SAS Environment Manager Agents are deployed to monitor the grid environment, and of course, the new EV Smart Agents are great for that. But they're not what is used to determine the best grid worker for the next job. The SAS Workload Orchestrator software takes on the role performed by Platform LSF for the most part, and its various instances coordinate to monitor workload as input to the job-assignment process.

There aren't really any 1-to-1 analogues for lim, pim, res, etc. The SWO handles those various tasks on each host. More information about monitoring and managing grid resources can be found in the SAS Grid Manager documentation.

Hope this helps,
Rob

 

 

Thanks Rob,

I looked at the docs but should have spent time at the booth in the Quad. There are a few upcoming SAS admin events, and I will ask if access to a demo can be included. It looks like an important part of the architectural roadmap.
