☑ This topic is solved.
Jovian
Obsidian | Level 7

Hi Everyone, 

 

I am trying to integrate Hadoop with SAS Viya 4 LTS 2025.03. It is a standard installation with no extra add-ons.

 

I have run the tracer script. 

I have created an NFS share that is accessible to all nodes of the cluster, with the JAR files placed in a directory named /nfs/data-drivers/hadoop/jars/.

 

Do I need the files from $deploy/sas-bases/examples/data-access?

 

Where do I make this entry?

[screenshot attachment: Jovian_0-1761552079894.png]

Link: https://documentation.sas.com/doc/ru/pgmsascdc/v_061/lestmtsglobal/p0db12w43txk8xn1mnoqe84ylxhi.htm

4 REPLIES
gwootton
SAS Super FREQ
You mentioned you created an NFS volume accessible to all nodes; did you mount that into the pods that will be using it?
The README.md in $deploy/sas-bases/examples/data-access discusses setting these variables (the recommendation being to use an OPTIONS SET statement rather than setting them in sas-access.properties), as well as mounting the Hadoop files into your pods (i.e., using the data-mounts-*.sample.yaml files in the same path).
--
Greg Wootton | Principal Systems Technical Support Engineer
Jovian
Obsidian | Level 7

Hi @gwootton,

 

I have used the following patch in all of the data-mounts-*.sample.yaml files.

 

patch: |-
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
    value:
      name: data-drivers
      mountPath: "/data-drivers"
  - op: add
    path: /spec/controllerTemplate/spec/volumes/-
    value:
      name: data-drivers
      nfs:
        server: xxx.xxx.xxx.xxx
        path: /nfs/data-drivers
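For patch transformers like this to take effect, they typically also need to be listed in the transformers block of the base kustomization.yaml before the deployment manifest is rebuilt. A sketch, assuming the edited copies were placed under site-config (the file names and paths here are illustrative, not from the thread):

```yaml
# kustomization.yaml (excerpt) -- illustrative file names; use the
# names and locations of your own edited copies under site-config
transformers:
  - site-config/data-access/data-mounts-compute.yaml
  - site-config/data-access/data-mounts-cas.yaml
```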

I have spoken to my colleagues, and my understanding is that the options for the JAR and configuration paths can be set at runtime. For example:

options set=SAS_HADOOP_JAR_PATH="/data-drivers/hadoop/jars";
options set=SAS_HADOOP_CONFIG_PATH="/data-drivers/hadoop/conf/";

libname a hadoop server="test.hadoop.com" port=10000 schema="testing" class=com.cloudera.hive.jdbc.HS2Driver url="<jdbc url>";

Is there a way to have these paths persist across users and sessions, by making a change to the compute context or some other configuration?

gwootton
SAS Super FREQ
Yes, you could add the same OPTIONS statements to the autoexec content for your compute context (to apply to a single context), or to sas.compute.server: autoexec_code to apply to all compute servers. Since this code sets environment variables, you could also set them in sas.compute.server: startup_commands using "export <environment variable>=<path>" lines.
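A minimal sketch of the startup_commands approach, using the mount paths from earlier in the thread (the exact paths are assumptions; set them in SAS Environment Manager under the sas.compute.server configuration definition):

```shell
# Lines for sas.compute.server: startup_commands -- paths assume the
# NFS volume is mounted at /data-drivers as in the patch above
export SAS_HADOOP_JAR_PATH=/data-drivers/hadoop/jars
export SAS_HADOOP_CONFIG_PATH=/data-drivers/hadoop/conf
```

With this in place, every compute server session starts with the variables already set, so individual users do not need the OPTIONS SET statements in their code.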
--
Greg Wootton | Principal Systems Technical Support Engineer
Jovian
Obsidian | Level 7

Thanks so much @gwootton.