ODBC and JDBC offer powerful alternatives for connecting to data sources that don't have a specific SAS/ACCESS engine. In addition, JDBC provides flexibility and simplicity in terms of configuration and programming. Once you have found the JDBC driver required to access your data source, you just need to make this single jar file available to your compute engine. Then, on the code side, you add the driver location to your CLASSPATH and possibly the driver class information. A single URL parameter lets you define the database type, the server, the port, and the database name all at once.
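For instance, a classic LIBNAME statement with the SAS/ACCESS Interface to JDBC might look like the sketch below; the driver directory, server, port, database, and credentials are placeholders, not values from this article.

/* Illustrative sketch only: paths, host, port, database and credentials are placeholders */
libname mydb jdbc
   classpath="/path/to/jdbc/drivers"                        /* directory containing the driver jar   */
   class="org.postgresql.Driver"                            /* driver class information              */
   url="jdbc:postgresql://myserver.example.com:5432/mydb"   /* database type, server, port, database */
   user="myuser" password="XXXXXXXX";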
In SAS Viya 4, the configuration of JDBC is a little more complicated because you have to make the JDBC drivers available through a volume mount in your Kubernetes environment. Nothing insurmountable. SAS Viya 2021.2.4 brings a slight improvement in managing this: you still have to make the drivers available, but they can be loaded into the CLASSPATH automatically if they are placed in the right location.
With SAS Viya 2021.2.4, if you mount a volume on your Compute and CAS pods at the /data-drivers/jdbc location (a reserved location; it cannot be anything else), every JDBC driver present in that directory is automatically added to the CLASSPATH. Your SAS code is then simplified: it only needs to mention the URL to configure your JDBC access.
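To make the contrast concrete, here is a minimal sketch of the same connection once the driver jar sits in the reserved location, again with placeholder connection details:

/* Illustrative sketch only: no CLASSPATH or CLASS needed, the jar in /data-drivers/jdbc is picked up automatically */
libname mydb jdbc
   url="jdbc:postgresql://myserver.example.com:5432/mydb"
   user="myuser" password="XXXXXXXX";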
Let's take an example. Assume we want to access a PostgreSQL database using a JDBC driver.
OK, we won't do that in real life because we already have a powerful SAS/ACCESS to PostgreSQL engine, so there is no need to access it through a JDBC driver. This is JUST an example!
To make the /data-drivers/jdbc mount point available to CAS, we need to create a Patch Transformer. Here is an example:
# This block of code is for mounting NFS persistentVolumeClaims and Data
# Connectors for the CAS server.
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: cas-add-jdbc-nfs-mount
patch: |-
  - op: add
    path: /spec/controllerTemplate/spec/volumes/-
    value:
      name: sas-viya-jdbc-volume
      nfs:
        path: /shared/gelcontent/jdbc
        server: mynfs.example.sas.com
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
    value:
      name: sas-viya-jdbc-volume
      mountPath: /data-drivers/jdbc
target:
  group: viya.sas.com
  kind: CASDeployment
  name: .*
  version: v1alpha1
You can observe that /data-drivers/jdbc points to a specific path (/shared/gelcontent/jdbc) on an NFS server (mynfs.example.sas.com). Anything in /shared/gelcontent/jdbc will be available in /data-drivers/jdbc in CAS.
Note that /data-drivers/jdbc is a reserved path that will be included automatically in the JDBC CLASSPATH. So the directory name is important.
Similarly, to make the /data-drivers/jdbc mount point available to SAS Compute Servers, we also need to create a Patch Transformer. Here is an example:
# This block of code is for mounting NFS persistentVolumeClaims
# to a compute session
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: compute-server-add-jdbc-nfs-mount
patch: |-
  - op: add
    path: /template/spec/volumes/-
    value:
      name: sas-viya-jdbc-volume
      nfs:
        path: /shared/gelcontent/jdbc
        server: mynfs.example.sas.com
  - op: add
    path: /template/spec/containers/0/volumeMounts/-
    value:
      name: sas-viya-jdbc-volume
      mountPath: /data-drivers/jdbc
target:
  kind: PodTemplate
  version: v1
  labelSelector: sas.com/template-intent=sas-launcher
We use the same NFS server and directory. Any driver present there will be available to both CAS and the SAS Compute Server.
In our kustomization.yaml file, we need to reference the newly created Patch Transformer resources (assuming I gave them the corresponding names):
transformers:
...
- site-config/cas/cas-add-jdbc-nfs-mount.yaml
- site-config/sas-compute-server/compute-server-add-jdbc-nfs-mount.yaml
...
We are ready to (re)build the site.yaml file and (re)apply it.
For CAS to take the changes into account, we need to restart it. For the Compute Server, there is nothing to restart: any new SAS Studio session will instantiate a new one with the updated configuration.
We can start dropping JDBC drivers in /shared/gelcontent/jdbc on the NFS server. In this example, I copied postgresql-42.3.3.jar.
Finally, from SAS Studio, we can test the access to PostgreSQL using JDBC. The LIBNAME statement is very easy because we don't have to worry about the CLASSPATH:
78 libname pg jdbc url="jdbc:postgresql://gel-postgresql.postgres.svc.cluster.local:5432/dvdrental"
79 user="sas" password=XXXXXXXX schema="public" ;
NOTE: Libref PG was successfully assigned as follows:
Engine: JDBC
Physical Name: jdbc:postgresql://gel-postgresql.postgres.svc.cluster.local:5432/dvdrental
80
81 proc datasets lib=pg ;
                              Directory

   Libref              PG
   Engine              JDBC
   Physical Name       jdbc:postgresql://gel-postgresql.postgres.svc.cluster.local:5432/dvdrental
   Schema              public
   DBMS Product Name   PostgreSQL
   DBMS Major Version  14
   DBMS Minor Version  2

                                      DBMS
                            Member    Member
   #  Name                  Type      Type
   1  ACTOR                 DATA      TABLE
   2  ACTOR_INFO            DATA      VIEW
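From there, the libref behaves like any other SAS library. A quick, hypothetical query against the ACTOR table listed above could look like this:

/* Hypothetical check: read a few rows from the ACTOR table through the JDBC libref */
proc sql outobs=5;
   select * from pg.actor;
quit;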
Similar parameters for the CASLIB statement:
78 cas mysession ;
NOTE: The session MYSESSION connected successfully to Cloud Analytic Services sas-cas-server-default-client using port 5570. The
UUID is 09743acf-f84b-ec43-be9f-3198772ada5b. The user is sasadm and the active caslib is CASUSER(sasadm).
NOTE: The SAS option SESSREF was updated with the value MYSESSION.
NOTE: The SAS macro _SESSREF_ was updated with the value MYSESSION.
NOTE: The session is using 3 workers.
79
80 caslib pg datasource=(srctype="jdbc", url="jdbc:postgresql://gel-postgresql.postgres.svc.cluster.local:5432/dvdrental",
81 user="sas", password=XXXXXXXX, schema="public") libref=caspg ;
NOTE: 'PG' is now the active caslib.
NOTE: Cloud Analytic Services added the caslib 'PG'.
NOTE: CASLIB PG for session MYSESSION will be mapped to SAS Library CASPG.
NOTE: Action to ADD caslib PG completed for session MYSESSION.
82
83 proc casutil ;
NOTE: The UUID '09743acf-f84b-ec43-be9f-3198772ada5b' is connected using session MYSESSION.
84
84 ! list files ;
                              Caslib Information

   Library         PG
   Source Type     jdbc
   Uid             sas
   Session local   Yes
   Active          Yes
   Personal        No
   Hidden          No
   Transient       No
   Url             jdbc:postgresql://gel-postgresql.postgres.svc.cluster.local:5432/dvdrental
   Schema          public

                              CAS File Information

   Name        Catalog   Schema   Type    Description
   actor       PG        public   TABLE
   address     PG        public   TABLE
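At this point, a table from this caslib can be loaded into CAS memory. Here is a minimal sketch, using the actor table from the listing above and assuming CASUSER as the output caslib:

/* Hypothetical load of a JDBC table into CAS memory */
proc casutil;
   load casdata="actor" incaslib="pg"
        casout="actor" outcaslib="casuser" replace;
quit;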
Let's activate some tracing options:
options sastrace="d,,," sastraceloc=saslog nostsuffix ;
libname pg jdbc url="jdbc:postgresql://gel-postgresql.postgres.svc.cluster.local:5432/dvdrental"
user="sas" password="xxxxxx" schema="public" ;
And observe the default CLASSPATH in the log:
77
JDBC: attempting JDBC connection: jdbc:postgresql://gel-postgresql.postgres.svc.cluster.local:5432/dvdrental
JDBC: CLASSPATH:
/opt/sas/viya/home/SASFoundation/lib/access/hadoop/access-hadoop-hivehelper-2.1.7.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.facebook.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.googleanalytics.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.googledrive.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.odata.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.onedrive.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.twitter.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.youtubeanalytics.jar
/data-drivers/jdbc/postgresql-42.3.3.jar
JDBC: DatabaseMetaData#getNumericFunctions=...
You can observe that /data-drivers/jdbc/postgresql-42.3.3.jar is automatically part of the CLASSPATH.
Thanks for reading.
Interesting
I followed this article and changed the driver to the Oracle JDBC driver, but it fails:
libname libTera jdbc classpath="/data-drivers/jdbc"
class="oracle.jdbc.driver.OracleDriver"
URL="jdbc:oracle:oci:@192.168.1.172:1521:system"
user="system"
password="oracle"
;
I get the following error messages:
1 %studio_hide_wrapper;
77
78 libname libTera jdbc classpath="/data-drivers/jdbc"
JDBC: attempting JDBC connection: jdbc:oracle:oci:@192.168.1.172:1521:system
JDBC: CLASSPATH:
/data-drivers/jdbc/
/data-drivers/jdbc/ojdbc8.jar
/opt/sas/viya/home/SASFoundation/lib/access/hadoop/access-hadoop-hivehelper-2.1.5.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.twitter.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.youtubeanalytics.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.facebook.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.googledrive.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.onedrive.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.googleanalytics.jar
/opt/sas/viya/home/lib64/accessclients/jdbc/cdata.jdbc.odata.jar
79 class="oracle.jdbc.driver.OracleDriver"
80 URL="jdbc:oracle:oci:@192.168.1.172:1521:system"
81 user="system"
82 password=XXXXXXXX
83 ;
ERROR: java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: Could not initialize class
oracle.jdbc.driver.BlockSource$ThreadedCachingBlockSource
ERROR: at java.util.concurrent.FutureTask.report(FutureTask.java:122)
ERROR: at java.util.concurrent.FutureTask.get(FutureTask.java:206)
ERROR: at com.sas.access.hadoop.hive.JdbcHelper.getConnection(JdbcHelper.java:835)
ERROR: Caused by: java.lang.NoClassDefFoundError: Could not initialize class
oracle.jdbc.driver.BlockSource$ThreadedCachingBlockSource
ERROR: at oracle.jdbc.driver.BlockSource.createBlockSource(BlockSource.java:102)
ERROR: at oracle.jdbc.driver.BlockSource.createBlockSource(BlockSource.java:94)
ERROR: at oracle.jdbc.driver.PhysicalConnection.setBlockSource(PhysicalConnection.java:881)
ERROR: at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:951)
ERROR: at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:928)
ERROR: at oracle.jdbc.driver.T2CConnection.<init>(T2CConnection.java:250)
ERROR: at oracle.jdbc.driver.T2CDriverExtension.getConnection(T2CDriverExtension.java:94)
ERROR: at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:732)
ERROR: at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:648)
ERROR: at com.sas.access.hadoop.hive.JdbcHelper$ConnectionTimeoutCheck.call(JdbcHelper.java:884)
ERROR: at com.sas.access.hadoop.hive.JdbcHelper$ConnectionTimeoutCheck.call(JdbcHelper.java:866)
ERROR: at java.util.concurrent.FutureTask.run(FutureTask.java:266)
ERROR: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
ERROR: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
ERROR: at java.lang.Thread.run(Thread.java:748)
ERROR: An error occurred while attempting to establish the connection: java.lang.NoClassDefFoundError: Could not initialize class
oracle.jdbc.driver.BlockSource$ThreadedCachingBlockSource
ERROR: Error in the LIBNAME statement.
84
85 %studio_hide_wrapper;
The JDBC driver, ojdbc8.jar, was downloaded from Oracle: JDBC and UCP Downloads page (oracle.com).
How can I fix this error?