I have been contacted several times recently on this very same topic by colleagues struggling with the SAS/ACCESS to Hadoop configuration when the Hadoop cluster is secured with Kerberos authentication. There are many reasons for that: Kerberos is quite a complex system that can be configured in various ways, and the Hadoop ecosystem is still pretty new, a bit different from traditional DBMSs, and keeps evolving rapidly.
Configuring and validating SAS/ACCESS to HADOOP in a Kerberized environment is far from an automatic or simple deployment step: it should be considered integration work.
There is already a lot of excellent material in Stuart Rogers's publications and videos on this topic. They are THE reference material...but of course you need some "offline" time to read them carefully.
The idea of this article is to provide you with a short list of actions to perform if you are on the ground, facing the "SAS/ACCESS to HADOOP with Kerberos is not working" problem. One of these actions is also a quick way to determine whether you are facing a real SAS/ACCESS configuration issue or a more general system/Kerberos configuration issue.
The five tips presented here are:
1) Run a PROC HADOOP step to get more detailed error messages.
2) Reproduce the connection outside of SAS (with beeline or the SAS standalone connectivity test tool).
3) Collect the required Hadoop JAR and *-site.xml configuration files with the hadooptracer script.
4) Install the Java Cryptography Extensions (JCE) if Kerberos uses AES-256 encryption.
5) Check the Kerberos ticket cache (KRB5CCNAME) of the user starting the SAS session.
Note: tips 2) and 3) are not Kerberos specific but can apply in a Kerberos environment.
The typical ERROR messages that you will see when you submit your HADOOP LIBNAME statement in SAS are generic connection failures, and several variants exist depending on the configuration.
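For reference, the statement that triggers these messages is usually of the form sketched below; the libref, server name, port, and schema are placeholders, not values from a real configuration.
   /* Kerberos case: no USER=/PASSWORD= options, the Kerberos credentials of the SAS session user are used */
   libname hdplib hadoop server="hivenode.example.com" port=10000 schema=default;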
These messages do not really help to identify the real issue, even when you enable the SASTRACE options... A good way to get a more detailed message is to run a PROC HADOOP step, for example:
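Here is a minimal sketch of such a test (the HDFS path is only a placeholder; any simple HDFS operation that forces a full connection to the cluster will do):
   proc hadoop verbose;
      /* create and then delete a test directory in HDFS to exercise the connection */
      hdfs mkdir='/tmp/sas_kerberos_test';
      hdfs delete='/tmp/sas_kerberos_test';
   run;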
Note: Refer to the SAS documentation for details on PROC HADOOP. With PROC HADOOP, you will likely get more detailed messages (typically Java stack traces) that help with the diagnosis.
As SAS/ACCESS to Hadoop processing happens in Java, we are looking for "GSS" or "KRB" Java exceptions, which will help the system administrator nail down the problem.
But now let's imagine the customer is really sure the problem is coming from SAS... Well, there is a way to establish that SAS has nothing to do with this connectivity issue.
The best way to show the customer that the problem is not coming from SAS is to reproduce it without using SAS.
When you run a HADOOP LIBNAME statement, you are actually trying to open a JDBC connection to a Hive server. When it is a Kerberized Hadoop cluster, you are trying to open this JDBC connection with Kerberos authentication for the Hive service.
If a tool like beeline is installed on the SAS machine, it can be used to validate the JDBC connection from the SAS server, as it is very close to what the SAS/ACCESS to HADOOP connection does (see the sketch further below). Otherwise, if it is not possible to use beeline (for example, it is not available on Solaris), SAS Technical Support provides standalone Java applications to verify Hive and HDFS connectivity to a Hadoop cluster.
Follow the link below to download the Java application and the associated documentation.
These tests are done outside of SAS and are a great way to validate your Hadoop client configuration and also the Kerberos configuration without involving the SAS software at all.
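For illustration, the beeline check mentioned above looks roughly like this (host, port, database, and Kerberos realm are placeholders; the principal must match the Hive service principal of your cluster):
   # request a TGT for your user before opening the JDBC connection
   kinit myuser@EXAMPLE.COM
   # open a JDBC connection to HiveServer2, authenticating with Kerberos against the Hive service principal
   beeline -u "jdbc:hive2://hivenode.example.com:10000/default;principal=hive/hivenode.example.com@EXAMPLE.COM"
If beeline connects and a simple "show tables;" works, the Kerberos and Hive layers are fine and the investigation can focus on the SAS side.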
Note: As for any other DBMS, the SAS/ACCESS to HADOOP connector relies on the DBMS client. So if the client cannot contact the server, there is no need even to think about starting SAS... In our case, the client consists of Hadoop JAR files and Hadoop *-site.xml configuration files (core-site.xml, hdfs-site.xml, etc.).
To be able to connect to Hadoop, SAS/ACCESS only needs to know where to find the Hadoop client JAR files (through SAS_HADOOP_JAR_PATH) and the Hadoop cluster configuration files (through SAS_HADOOP_CONFIG_PATH).
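In a SAS session, this typically translates into something like the following sketch; the two directories are placeholders for wherever the collected files were stored, and the variables can also be set in the environment before SAS starts:
   options set=SAS_HADOOP_JAR_PATH="/opt/sas/hadoopfiles/jars";
   options set=SAS_HADOOP_CONFIG_PATH="/opt/sas/hadoopfiles/conf";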
In the recent SAS 9.4 releases, you can configure SAS/ACCESS to HADOOP using the SAS Deployment Manager (SDM). The objective of this SDM task is to collect the Hadoop JAR files and Hadoop configuration files required by SAS/ACCESS to HADOOP.
In some situations it is not possible to meet the requirements to use the SDM. However, if you have shell access to the Hive node, it is still possible to use the main script called by the SDM in console mode. It will allow you to automatically collect the proper JAR and *-site.xml files.
The script is called hadooptracer and is part of your SAS/ACCESS to Hadoop deployment (usually available in /SASHadoopConfigurationLibraries/2.1/data).
See below the instructions to run it outside of the SDM in order to collect the proper JAR and "*-site.xml" files for your specific Hadoop cluster.
Copy the hadooptracer.py script to /tmp on the Hive node (from the SASHome directory on the SAS server). Check that strace, wget, and python are installed on the Hive node. If not, install them.
Make the hadooptracer executable:
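For example, assuming the script was copied to /tmp as described above:
   chmod +x /tmp/hadooptracer.py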
Run it as the HDFS superuser in your Hadoop environment, usually 'hdfs' or 'hadoop':
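A sketch of the run, assuming the 'hdfs' superuser and the default output locations (the exact command-line options of hadooptracer.py can vary between versions, so check the documentation shipped with the script):
   # switch to the Hadoop superuser and get a Kerberos ticket for it (see the note below)
   sudo su - hdfs
   kinit hdfs@EXAMPLE.COM
   # run the tracer; by default the collected files land in /tmp/jars and /tmp/confs (see the next step)
   python /tmp/hadooptracer.py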
Note: In a Kerberized Hadoop environment, make sure to kinit as the hdfs user (Hadoop superuser) before running hadooptracer.py.
When complete, check whether the /tmp/confs and /tmp/jars folders contain the files required for SAS_HADOOP_JAR_PATH and SAS_HADOOP_CONFIG_PATH. Note: even if you see some errors in the output of the script, the required files might still have been collected.
UPDATE: The hadooptracer tool is also available for download on the SAS Support FTP server:
ftp.sas.com/techsup/download/blind/access/hadooptracer.zip
So, even before installing SAS, you can grab it and use it to collect the JAR and configuration files, then run the "Hive HDFS Connectivity test tool" (discussed in the previous tip) to ensure that your client machine can connect to your Hadoop cluster and access the data.
A common pitfall is the fact that, by default, Java is only able to process AES-128 encryption in Kerberos. If you have AES-256 encrypted Kerberos communications, then the Java Cryptography Extensions (JCE) are required to allow the connection. The instructions below are an extract from the Grid for Hadoop Configuration Guide but are very relevant in any SAS with "Kerberized" Hadoop integration work.
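As a rough sketch of what those instructions boil down to (the JAVA_HOME path and the download location are placeholders, and the policy files must match the major version of the Java used by SAS):
   # back up the default policy files in the JRE used by SAS
   cd $JAVA_HOME/jre/lib/security
   cp local_policy.jar local_policy.jar.orig
   cp US_export_policy.jar US_export_policy.jar.orig
   # copy the unlimited strength policy files extracted from the JCE download
   cp /tmp/UnlimitedJCEPolicy/local_policy.jar .
   cp /tmp/UnlimitedJCEPolicy/US_export_policy.jar .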
When we start the SAS session to open our Hadoop LIBNAME, SAS needs to know where it can find a Kerberos TGT ("Ticket Granting Ticket") for the user. SAS relies on an environment variable for that: KRB5CCNAME, which points to the correct Kerberos ticket cache. Depending on your system authentication configuration (usually PAM), this variable might or might not be set automatically. The instructions below can be used to check the existence of the TGT cache and validate the HADOOP LIBNAME, starting from the command line on the SAS server:
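A minimal sketch of that check, run as the operating system user who will start the SAS session (the principal shown is a placeholder):
   # where does the session expect to find the Kerberos ticket cache?
   echo $KRB5CCNAME
   # list the cached tickets and their flags; you should see a krbtgt/REALM@REALM entry
   klist -f
   # if there is no ticket cache, request a TGT manually
   kinit myuser@EXAMPLE.COM
Once a valid TGT is visible in the cache, start your SAS session from the same shell and submit the HADOOP LIBNAME again.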
Note: If you don't see a ticket cache, you can use the "kinit" command to request a TGT.
Check that you have the "F" (forwardable) flag among the ticket flags.
This article is far from being an exhaustive list of all the potential problems that you might encounter in the field. It does not cover any subtle Kerberos (or network/system/PAM) configuration issue that could explain a connectivity failure. However, if you are on a customer site, it will hopefully help you check several basic things that, in most cases, explain why the SAS/ACCESS to HADOOP connection is failing. Thanks for reading!
Hi team,
Do you have an external link to ...
Thanks,
Nico.
Hi Nico,
Sorry I forgot to remove this link to an internal resource from my blog.
I'll check if this article has been published externally and come back to you.
Thanks
Raphael
Hi Nico
Although this specific blog is not public (I removed the link from my blog), you can find the same information covered in two "Hadoop with Kerberos considerations" papers here: http://support.sas.com/resources/papers/tnote/hadoop.html
Hope that helps.
Thanks
Raphael
Great, thanks Raphael.