SAS Viya 2021.1 supports access to the Snowflake database using the SAS Data Connector to Snowflake. Before users can access Snowflake database tables, extensive configuration is required: a Unix ODBC driver manager and the Snowflake ODBC driver must be installed and configured at a location available to the CAS PODs. The installation and configuration steps are somewhat tedious given the Kubernetes cluster and PVC involved.
This post discusses the detailed steps to install and configure the Snowflake ODBC driver in a Viya 2021.1 environment.
Snowflake is a data warehouse provided as software-as-a-service (SaaS). It's built on top of AWS or Azure cloud infrastructure. All aspects of the data warehouse are managed and maintained by the Snowflake organization (vendor), so users do not have to select, install, configure, or manage any hardware or software. This data warehouse is ideal for an organization that does not want to dedicate resources to maintaining in-house servers.
The following picture depicts SAS Viya (CAS) access to the Snowflake database. The environment consists of a Snowflake database cluster hosted by the Snowflake organization, and SAS Viya deployed on an AWS EKS cluster with an NFS server. A few specific folders from the NFS server are mounted to the EKS cluster PODs as PVCs. The same PVCs are available to the CAS PODs for client access configuration.
This article assumes you have an EKS cluster up and running with a Jump server and an NFS server. The Jump server is used to access the K8s cluster and the NFS server. The NFS server and its folders provide a persistence location for the K8s PODs. An NFS server folder (e.g. /viya-share/sasviya4aws/config ) is mounted to both the Jump server and the K8s PODs for the client access software.
The following steps describe the installation and configuration of the Snowflake ODBC driver, performed from the Jump server into the NFS server folder that is mounted to the Viya CAS PODs.
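Before writing anything into the shared folder, it is worth confirming that it really is a mount point on the Jump server. A minimal sketch (the path is this environment's convention; substitute your own):

```shell
# Sketch: fail early if the given path is not actually a mount point,
# so files are not accidentally written to the local disk instead of NFS.
require_mount() {
  if mountpoint -q "$1"; then
    echo "mounted: $1"
  else
    echo "NOT mounted: $1" >&2
    return 1
  fi
}

# Example (path from this environment):
# require_mount /viya-share
```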
At the Jump server you need GCC (the C compiler), which can be installed as follows.
gcc --version
sudo apt update
sudo apt install -y build-essential
gcc --version
A Unix ODBC driver manager is required to use the Snowflake ODBC driver. In this case, we use iODBC, as it has better support for text/string data types.
The '/export' folder from the NFS server is mounted into the '/viya-share' folder of the Jump server, so file operations executed at the Jump server under the '/viya-share' folder update files at the NFS server. The same NFS server folders are mounted to the CAS and Compute PODs.
Download:
mkdir -p /viya-share/sasviya4aws/config/access-clients/odbc
cd /viya-share/sasviya4aws/config/access-clients/odbc
wget https://sourceforge.net/projects/iodbc/files/iodbc/3.52.15/libiodbc-3.52.15.tar.gz
tar -xzvf libiodbc-3.52.15.tar.gz
ls -l
Install:
cd /viya-share/sasviya4aws/config/access-clients/odbc/libiodbc-3.52.15
./configure --prefix=/viya-share/sasviya4aws/config/access-clients/odbc/libiodbc-3.52.15
sudo make
sudo make install
Note: You might see some errors while running make install; they can be ignored.
The '/export' folder from the NFS server is mounted into the '/viya-share' folder of the Jump server. The Snowflake ODBC software is downloaded and installed onto the NFS drive.
Download:
cd /viya-share/sasviya4aws/config/access-clients/odbc
wget https://sfc-repo.azure.snowflakecomputing.com/odbc/linux/latest/snowflake_linux_x8664_odbc-2.24.0.tgz
tar -xzvf snowflake_linux_x8664_odbc-2.24.0.tgz
ls -l
Install:
cd /viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc
sudo ./iodbc_setup.sh
File location: /viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/conf/iodbc.snowflake.ini
a. DriverManagerEncoding=UTF-32 --to-- DriverManagerEncoding=UTF-16
b. CABundleFile=/path/to/cacert.pem --to-- CABundleFile=/viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/lib/cacert.pem
c. ODBCInstLib=libiodbcinst.so.2 --to-- ODBCInstLib=/viya-share/sasviya4aws/config/access-clients/odbc/libiodbc-3.52.15/iodbcinst/.libs/libiodbcinst.so.2
File location: /viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/conf/odbcinst.ini
a. Driver=/path/to/your/libSnowflake.so --to-- Driver=/viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/lib/libSnowflake.so
File location: /viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/conf/odbc.ini
[testdemo]
Description=Snowflake DSII
Driver=/viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/lib/libSnowflake.so
Locale=en-US
server=saspartner.snowflakecomputing.com
schema=SASJST
database=USERS_DB
warehouse=USERS_WH
File location: /viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/lib/simba.snowflake.ini
a. DriverManagerEncoding=UTF-32 --to-- DriverManagerEncoding=UTF-16
b. CABundleFile=/path/to/cacert.pem --to-- CABundleFile=/viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/lib/cacert.pem
c. ODBCInstLib=libiodbcinst.so.2 --to-- ODBCInstLib=/viya-share/sasviya4aws/config/access-clients/odbc/libiodbc-3.52.15/iodbcinst/.libs/libiodbcinst.so.2
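The three edits above are identical for iodbc.snowflake.ini and simba.snowflake.ini, so they can also be scripted instead of made by hand. A minimal sed sketch, assuming the install locations used in this article (adjust BASE if yours differ):

```shell
# Sketch: apply the common Snowflake ODBC ini edits with sed.
# BASE is the install prefix used throughout this article (an assumption).
BASE=/viya-share/sasviya4aws/config/access-clients/odbc

fix_ini() {  # edit one ini file in place
  sed -i \
    -e 's|^DriverManagerEncoding=.*|DriverManagerEncoding=UTF-16|' \
    -e "s|^CABundleFile=.*|CABundleFile=$BASE/snowflake_odbc/lib/cacert.pem|" \
    -e "s|^ODBCInstLib=.*|ODBCInstLib=$BASE/libiodbc-3.52.15/iodbcinst/.libs/libiodbcinst.so.2|" \
    "$1"
}

# fix_ini "$BASE/snowflake_odbc/conf/iodbc.snowflake.ini"
# fix_ini "$BASE/snowflake_odbc/lib/simba.snowflake.ini"
```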
Now that the '.ini' file configuration is in place at the Jump server, you can test Snowflake database access by running the 'iodbctest' utility. Export the following variables at the $ prompt before running the 'iodbctest' statement.
Export ODBC system variables:
export PATH=$PATH:/home/jumpuser/bin:/viya-share/sasviya4aws/config/access-clients/odbc/libiodbc-3.52.15:/viya-share/sasviya4aws/config/access-clients/odbc/libiodbc-3.52.15/bin:/viya-share/sasviya4aws/config/access-clients/odbc/libiodbc-3.52.15/samples
export LD_LIBRARY_PATH=/viya-share/sasviya4aws/config/access-clients/odbc/libiodbc-3.52.15/iodbcinst/.libs/:/viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/lib
export ODBCSYSINI=/viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/conf
export ODBCINI=/viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/conf/odbc.ini
export ODBCINSTINI=/viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/conf/odbcinst.ini
Test connection with DSN='testdemo':
iodbctest "DSN=testdemo;UID=SASJST;PWD=XXXXXXXX"
Log:
$ iodbctest "DSN=testdemo;UID=SASJST;PWD=XXXXX"
iODBC Demonstration program
This program shows an interactive SQL processor
Driver Manager: 03.52.1521.0607
Driver: 2.24.0 (Snowflake)
SQL>
SQL>Select * from "USERS_DB"."SASJST"."CARS" WHERE ("Type"='Sedan')
Make |Model |Type |Origin|DriveTrain|MSRP |Invoice |EngineSize |Cylinders |Horsepower |MPG_City |MPG_Highway |Weight |Wheelbase |Length
-------------+----------------------------------------+--------+------+----------+------------------------------------------------------
Mercedes-Benz| CL600 2dr |Sedan |Europe|Rear |128420 |119600 |5.5 |12 |493 |13 |19 |4473 |114 |196
Volvo | S80 T6 4dr |Sedan |Europe|Front |45210 |42573 |2.9 |6 |268 |19 |26 |3653 |110 |190
Volvo | C70 HPT convertible 2dr |Sedan |Europe|Front |42565 |40083 |2.3
At this stage, the Snowflake ODBC config files point to driver paths under the /viya-share/sasviya4aws/config/ folder. This path to the NFS server is valid only from the Jump server, as defined in its mount point. The mount point from the CAS/Compute PODs to the NFS server is different, i.e. /mnt/viya-share/config. So all the config files need to be updated again with the new path for the CAS/Compute PODs. These files are updated from the Jump server.
File location: /viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/conf/iodbc.snowflake.ini
a. ErrorMessagesPath= --to-- ErrorMessagesPath=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/ErrorMessages/
b. LogPath= --to-- LogPath=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/log
c. CABundleFile= --to-- CABundleFile=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/lib/cacert.pem
d. ODBCInstLib= --to-- ODBCInstLib=/mnt/viya-share/config/access-clients/odbc/libiodbc-3.52.15/iodbcinst/.libs/libiodbcinst.so.2
File location: /viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/conf/odbcinst.ini
a. Driver= --to-- Driver=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/lib/libSnowflake.so
File location: /viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/conf/odbc.ini
[testdemo]
Description=Snowflake DSII
Driver=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/lib/libSnowflake.so
Locale=en-US
server=saspartner.snowflakecomputing.com
schema=SASJST
database=USERS_DB
warehouse=USERS_WH
File location: /viya-share/sasviya4aws/config/access-clients/odbc/snowflake_odbc/lib/simba.snowflake.ini
a. ErrorMessagesPath= --to-- ErrorMessagesPath=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/ErrorMessages/
b. LogPath= --to-- LogPath=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/log
c. CABundleFile= --to-- CABundleFile=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/lib/cacert.pem
d. ODBCInstLib= --to-- ODBCInstLib=/mnt/viya-share/config/access-clients/odbc/libiodbc-3.52.15/iodbcinst/.libs/libiodbcinst.so.2
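Because the POD-side edits mostly just re-point the same keys from the Jump-server prefix to the /mnt/viya-share/config prefix, they too can be scripted. A sketch, assuming this environment's two mount-point prefixes (substitute your own):

```shell
# Sketch: rewrite the ini files so paths resolve inside the CAS/Compute
# PODs, where the NFS share is mounted at a different prefix. The two
# prefixes below are this environment's mount points (assumptions).
OLD=/viya-share/sasviya4aws/config
NEW=/mnt/viya-share/config

repoint_ini() {  # edit one ini file in place for the POD-side mount
  sed -i \
    -e "s|$OLD|$NEW|g" \
    -e "s|^ErrorMessagesPath=.*|ErrorMessagesPath=$NEW/access-clients/odbc/snowflake_odbc/ErrorMessages/|" \
    -e "s|^LogPath=.*|LogPath=$NEW/access-clients/odbc/snowflake_odbc/log|" \
    "$1"
}

# repoint_ini "$OLD/access-clients/odbc/snowflake_odbc/conf/iodbc.snowflake.ini"
# repoint_ini "$OLD/access-clients/odbc/snowflake_odbc/conf/odbcinst.ini"
# repoint_ini "$OLD/access-clients/odbc/snowflake_odbc/conf/odbc.ini"
# repoint_ini "$OLD/access-clients/odbc/snowflake_odbc/lib/simba.snowflake.ini"
```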
The ODBC system variables used by the CAS and Compute PODs need to be updated to use the configuration and driver files located under the /mnt/… folder. The changes are applied by updating the …/site-config/data-access/sas-access.properties file. After updating the .properties file, include it in kustomization.yaml, rebuild the manifest, and apply it to the K8s namespace. After the manifest is applied, restart the CAS and Compute PODs.
Append new lines to sas-access.properties file:
tee -a ~/clouddrive/project/deploy/${EKSCLUSTER}/${NS}/site-config/data-access/sas-access.properties > /dev/null << EOF
########################################
# SAS/ACCESS to ODBC for Snowflake
########################################
ODBCSYSINI=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/conf
ODBCINSTINI=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/conf/odbcinst.ini
ODBCINI=/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/conf/odbc.ini
LD_LIBRARY_PATH=/mnt/viya-share/config/access-clients/odbc/libiodbc-3.52.15/iodbcinst/.libs/:/mnt/viya-share/config/access-clients/odbc/snowflake_odbc/lib
EOF
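A quick sanity check that the heredoc above actually landed the expected keys in the properties file can be scripted too. A sketch (the helper name and the file path argument are illustrative):

```shell
# Sketch: verify that all four ODBC keys are present in a properties file.
check_props() {
  local key
  for key in ODBCSYSINI ODBCINSTINI ODBCINI LD_LIBRARY_PATH; do
    grep -q "^$key=" "$1" || { echo "missing: $key" >&2; return 1; }
  done
  echo "all ODBC keys present"
}

# check_props ~/clouddrive/project/deploy/${EKSCLUSTER}/${NS}/site-config/data-access/sas-access.properties
```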
Build and apply site.yaml:
cd ~/clouddrive/project/deploy/${EKSCLUSTER}/${NS}
kustomize build -o site.yaml
kubectl -n ${NS} apply -f site.yaml
Restart CAS and Compute PODs:
cd ~/clouddrive/project/deploy/${EKSCLUSTER}/${NS}
kubectl -n ${NS} delete pods -l casoperator.sas.com/server=default
kubectl -n ${NS} delete pod --selector='app=sas-compute'
kubectl -n ${NS} delete pod --selector='app=sas-launcher'
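After the deletes, you can confirm the replacement pods come back up before running any SAS code. A small helper sketch, assuming the default `kubectl get pods` column layout where STATUS is the third column (note that sas-compute and sas-launcher pods are created per session, so only the CAS selector is checked here):

```shell
# Sketch: read `kubectl get pods --no-headers` output on stdin and
# succeed only if every listed pod's STATUS column is "Running".
# (Empty input also passes, so make sure the selector matches pods.)
all_running() {
  awk '$3 != "Running" { bad = 1 } END { exit bad }'
}

# kubectl -n ${NS} get pods --no-headers -l casoperator.sas.com/server=default | all_running \
#   && echo "CAS pods are back up"
```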
Once the Snowflake ODBC configuration is in place and the NFS server folder is mounted to the CAS/Compute PODs, CAS can be loaded from a Snowflake database table using the serial or multi-node data load mechanism. The following code describes a CAS load from a Snowflake database table using the serial data load method.
Code:
CAS mySession SESSOPTS=(CASLIB=casuser TIMEOUT=99 LOCALE="en_US" metrics=true);
%let MYUID=SASJST;
%let MYPWD=XXXXXX;
%let MYDSN=testdemo;
%let MYSCHEMA=SASJST;
caslib snowlib desc='Snowflake Caslib'
dataSource=(srctype='snowflake',
username=&MYUID,
password=&MYPWD,
snowflake_dsn=&MYDSN,
schema=&MYSCHEMA,
);
/*save a CAS table to Snowflake */
proc casutil incaslib="snowlib" outcaslib="snowlib";
load data=sashelp.cars casout="cars" replace;
save casdata="cars" casout="cars_cas" replace;
list files;
quit ;
/* load a Snowflake table to CAS*/
proc casutil incaslib="snowlib" outcaslib="snowlib" ;
load casdata="cars_cas" casout="cars_cas" replace;
list tables;
quit ;
cas mysession terminate;
Log extract:
.....
..............
84
85 caslib snowlib desc='Snowflake Caslib'
86 dataSource=(srctype='snowflake',
87 username=&MYUID,
88 password=&MYPWD,
89 snowflake_dsn=&MYDSN,
90 schema=&MYSCHEMA,
91 );
NOTE: Executing action 'table.addCaslib'.
NOTE: 'SNOWLIB' is now the active caslib.
NOTE: Cloud Analytic Services added the caslib 'SNOWLIB'.
NOTE: Action 'table.addCaslib' used (Total process time):
….
…………….
94 /*save a CAS table to Snowflake */
95 proc casutil incaslib="snowlib" outcaslib="snowlib";
NOTE: The UUID '7244607c-90cd-5146-98cc-43194a67d3d5' is connected using session MYSESSION.
96 load data=sashelp.cars casout="cars" replace;
NOTE: The INCASLIB= option is ignored when using the DATA= option in the LOAD statement.
NOTE: Executing action 'table.addTable'.
NOTE: Action 'table.addTable' used (Total process time):
NOTE: SASHELP.CARS was successfully added to the "SNOWLIB" caslib as "CARS".
97 save casdata="cars" casout="cars_cas" replace;
NOTE: Executing action 'table.save'.
NOTE: Performing serial SaveTable action using SAS Data Connector to Snowflake.
NOTE: Cloud Analytic Services saved the file cars_cas in caslib SNOWLIB.
….
…………….
The following screenshot shows the Snowflake database table loaded into CAS.
Many thanks to my SAS colleague Andy Bouts for his contribution to this post!
Important Links:
SAS/ACCESS Interface to Snowflake
Snowflake User’s Guide - ODBC Driver
Find more articles from SAS Global Enablement and Learning here.