Hi Everyone,
I am trying to integrate Hadoop with SAS Viya 4 LTS 2025.03. It is a standard installation with no extra add-ons.
I have run the tracer script.
I have created an NFS share that is accessible from all nodes of the cluster, and I have placed the JAR files in /nfs/data-drivers/hadoop/jars/.
Do I need the files from $DEPLOY/sas-bases/examples/data-access?
Also, where do I make this entry?
link - https://documentation.sas.com/doc/ru/pgmsascdc/v_061/lestmtsglobal/p0db12w43txk8xn1mnoqe84ylxhi.htm
Hi @gwootton,
I have used the following patch in all of the data-mount-*.sample.yaml files (e.g. the data-mounts-deployment transformer):

patch: |-
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
    value:
      name: data-drivers
      mountPath: "/data-drivers"
  - op: add
    path: /spec/controllerTemplate/spec/volumes/-
    value:
      name: data-drivers
      nfs:
        server: xxx.xxx.xxx.xxx
        path: /nfs/data-drivers

I have spoken to my colleagues, and my understanding is that the options for the JAR and config paths can be set at runtime.
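For anyone following along: a patch transformer like this only takes effect once it is referenced from the transformers block of the deployment's kustomization.yaml. A minimal sketch, assuming the edited sample was copied to site-config/data-mounts-deployment.yaml (that destination path is my assumption, not confirmed in this thread):

  transformers:
  - site-config/data-mounts-deployment.yaml

After adding the entry, rebuild and reapply the manifest the usual way (kustomize build, then apply) so the volume and volumeMount land on the pod template.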
For example:
options set=SAS_HADOOP_JAR_PATH="/data-drivers/hadoop/jars";
options set=SAS_HADOOP_CONFIG_PATH="/data-drivers/hadoop/conf/";
libname a hadoop server="test.hadoop.com" port=10000 schema="testing"
   class=com.cloudera.hive.jdbc.HS2Driver url="<jdbc url>";
Is there a way to have these paths persist across users and sessions, for example by making a change to the Compute context or some other configuration?
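One common approach (an assumption on my part, not confirmed in this thread) is to place the two options statements in the Compute context's SAS autoexec contents in SAS Environment Manager (Contexts > Compute contexts > Advanced), so every compute session launched under that context picks them up without each user having to submit them:

  options set=SAS_HADOOP_JAR_PATH="/data-drivers/hadoop/jars";
  options set=SAS_HADOOP_CONFIG_PATH="/data-drivers/hadoop/conf/";

The paths shown assume the NFS mount described earlier is available at /data-drivers inside the compute pods.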
Thanks so much @gwootton.