
Integrating Windows Shares with SAS Viya on Red Hat OpenShift


SAS 9 supports a wide variety of operating systems, including z/OS, Windows, Linux, and other UNIX flavors. SAS Viya, similarly, supports a wide variety of Kubernetes distributions, including Azure AKS, Amazon EKS, Google GKE, Red Hat OpenShift, and upstream open-source Kubernetes. Yet all the supported Kubernetes distributions share a single underlying operating system: Linux. This means that SAS 9 customers migrating to SAS Viya will also have to migrate to Linux.

 

Many SAS 9 customers currently using Windows environments want to keep their new environment as close as possible to their current architecture, even after migrating to SAS Viya.

 

This often leads to questions such as “My team is hosting shared data on Windows Server shared drives. Can I still use those with SAS Viya?”

 

Sometimes, after digging into the business use case, it turns out that it might be better to switch to a different storage technology, perhaps one provided by the chosen cloud provider or one native to the Linux world in general.

 

Other times, it’s more important to maintain access to shared data without disrupting established business processes.

 

 

Presenting the use case

 

In this article, we’ll focus on a specific use case that our customers face quite often. One (or more) Windows shares host company-wide data, and users want to access that data from SAS Viya deployed on Red Hat OpenShift.

 

While OpenShift simplifies the initial configuration of the storage drivers, most of what we’ll present is also applicable to SAS Viya deployed on any other Kubernetes distribution.

 

For the sake of simplicity, in this article we’ll use the terms Windows share, SMB share, and CIFS share as synonyms, even though, strictly speaking, Windows is the operating system that usually provides the shared storage, while SMB and CIFS are two similar client/server protocols used to share the data.

 

In our test environment, we are using a file share hosted on Azure and configured with the SMB protocol. We have uploaded a few SAS datasets to use during the tests:

 

01_ER_20251031_01_AzureShare.png
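If you need to create a similar test share yourself, the Azure CLI is one option. Here is a minimal sketch, assuming you are already logged in with the Azure CLI; the resource group (myresourcegroup) and storage account (myazurestorageaccount) names are hypothetical placeholders for resources that already exist in your subscription:

# create a 120GB file share in the existing storage account
az storage share create \
  --account-name myazurestorageaccount \
  --name gelwinshare \
  --quota 120

# retrieve the storage account key, to be used later as the SMB password
az storage account keys list \
  --resource-group myresourcegroup \
  --account-name myazurestorageaccount \
  --query "[0].value" --output tsv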

 


 

 

As a first check, we can attach the share to our Windows client, to make sure that we have all the required connection details (server address, share name, user ID, password) and that everything works as expected:

 

02_ER_20251031_02_WindowsShare.png
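If you also have a Linux host at hand, the same connection details can be double-checked from there; a quick sketch, assuming the smbclient utility (part of the Samba client packages) is installed and reusing the connection details from our test environment:

# list the contents of the share to confirm server address, share name and credentials
smbclient "//myazurestorageaccount.file.core.windows.net/gelwinshare" \
  --user "myazurestorageaccount%myazurestoragekey" \
  --command "ls"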

 

 

We can also set a few SAS Studio settings that will help verify file access from SAS Viya.

 

As a SAS Viya administrator, open SAS Environment Manager, select the Configuration icon in the left pane, and filter on the SAS Studio service. Click the pencil icon next to the sas.studio instance to enter edit mode:

 

03_20251031_03_ConfigureSASStudio-1024x413.png

 

 

Set the following properties and save (some SAS Studio services will restart when you apply the settings):

 

fileNavigationCustomRootPath: /gelcontent
fileNavigationRoot: CUSTOM
showServerFiles: True
serverDisplayName: GelServer

 

In this case, “/gelcontent” is the directory where we’ll mount the share (in the next steps), while “GelServer” is a friendly name we have chosen to identify the SAS backend server.

 

 

Installing the driver

 

Red Hat OpenShift, just like many other Kubernetes platforms, does not include the CIFS/SMB CSI driver by default, even though it is required for Kubernetes to access SMB shares. One peculiarity of OpenShift is that it provides an Operator to manage the driver, so the installation happens in two phases: first install the CIFS/SMB CSI Driver Operator, then install the driver itself.

 

 

Install the Operator

 

Let’s follow the official Red Hat Documentation. In short, as a cluster administrator, open the OperatorHub and search for “CIFS” or “SMB”. You should find the tile to install the Operator:

 

04_ER_20251031_04_OCP_InstallDriver_Hub.png

 

 

Click on the tile, accept all defaults and select Install. The installer will run, and, at the end, you will be presented with a confirmation dialog:

 

05_ER_20251031_05_OCP_InstallDriver_Done.png

 

 

You can perform a quick check: click the link at the bottom of the confirmation dialog, then select the “CIFS/SMB CSI Driver Operator”; you should see the operator details:

 

06_ER_20251031_06_OCP_InstallDriver_VerifyOperator-1024x783.png

 

 

Next, move to the Deployments pane and filter on the openshift-cluster-csi-drivers namespace:

 

07_ER_20251031_07_OCP_InstallDriver_VerifyPods-1024x484.png
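The same check can also be performed from the command line; a quick sketch, as a cluster administrator:

# list the Operator's ClusterServiceVersion and the deployments it created
oc -n openshift-cluster-csi-drivers get csv
oc -n openshift-cluster-csi-drivers get deployments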

 

 

If you have other driver providers installed, you might notice that they list both an “operator” deployment and a “controller” deployment, while the smb-csi-driver only lists an “operator” deployment. That’s expected: the installation is not done yet. Now it’s time to set up the driver.

 

 

Install the SMB CSI driver

 

Let’s install the SMB CSI driver. Again, as a cluster administrator, the simplest way is to use the oc command-line utility to submit a short YAML definition with the following code:

 

echo "---
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
    name: smb.csi.k8s.io
spec:
  managementState: Managed" | oc create -f -

 

This will trigger the creation of the driver, including a new Deployment (smb-csi-driver-controller) and a new DaemonSet (smb-csi-driver-node), both in the openshift-cluster-csi-drivers namespace.

 

We can use the console to verify the success of the operation.

 

Expand the Administration menu, select CustomResourceDefinitions, then ClusterCSIDriver:

 

08_ER_20251031_08_OCP_InstallDriver_CRD.png

 

 

Select the Instances tab. It should list the new driver, together with instances of other CSI drivers that were previously installed:

 

09_ER_20251031_09_OCP_InstallDriver_VerifyCRD.png
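If you prefer the command line, the same verification can be done with oc; a quick sketch:

# confirm that the ClusterCSIDriver resource exists
oc get clustercsidriver smb.csi.k8s.io

# confirm that the controller Deployment and the node DaemonSet were created
oc -n openshift-cluster-csi-drivers get deployment/smb-csi-driver-controller daemonset/smb-csi-driver-node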

 

 

Using the SMB driver to mount Windows shares

 

At this point, it’s time to use the CSI driver to connect the cluster to the Windows share. To make the files stored in the Windows share available to pods running in the cluster, you must use what Kubernetes calls “static provisioning”: create a PersistentVolume (PV) that points to the share, then use a PersistentVolumeClaim (PVC) to reference the PV in the SAS Viya compute pods.

 

To avoid embedding connection details (username and password) in clear text in the connection definition, the SMB driver references a Kubernetes secret that can safely store that information.

 

Here are all the steps.

 

  1. Set up the environment and create a secret to store the authentication details
    NS="gel-viya"
    USERNAME="myazurestorageaccount"
    PASSWORD="myazurestoragekey"
    DEPLOY="/home/cloud-user/project/gelocp"
    SERVER="${USERNAME}.file.core.windows.net"
    SHARE="gelwinshare"
    SECRET="smbcreds"
    
    # delete the secret in case it was already present
    oc -n ${NS} delete secret ${SECRET} --ignore-not-found
    
    oc -n ${NS} create secret generic ${SECRET} \
      --from-literal=username="${USERNAME}" \
      --from-literal=password="${PASSWORD}"

    Obviously, don't just copy and paste the code above: enter the correct values for your environment in each variable (which I forgot to do more than once during my testing 😊).

 

  2. Create a static volume that points to the share
    echo "---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      annotations:
        pv.kubernetes.io/provisioned-by: smb.csi.k8s.io
      name: pv-${SHARE}
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: ''
      mountOptions:
        - dir_mode=0777
        - file_mode=0777
        - uid=1001
        - gid=1001
      csi:
        driver: smb.csi.k8s.io
        volumeHandle: ${SERVER}/${SHARE}#
        volumeAttributes:
          source: //${SERVER}/${SHARE}
        nodeStageSecretRef:
          name: ${SECRET}
          namespace: ${NS}" | oc create -f -

    In this case, the mountOptions make files and directories readable and writable by everyone. Also, the files will appear as owned by user and group 1001, which, by default, corresponds to “sas”.

 

  3. Create a PVC that points to the volume
    echo "---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      annotations:
      name: pvc-${SHARE}
    spec:
      resources:
        requests:
          storage: 80Gi
      accessModes:
        - ReadWriteMany
      volumeName: pv-${SHARE}
      storageClassName: ''" | oc create -f -

    Notice that the storage class is explicitly set to empty, while the volumeName points to the PV just created: these settings indicate to Kubernetes that we are configuring static volume provisioning.

 

  4. Create a patch that targets SAS Compute, Batch and Connect servers, to add the PVC as a new volume mount:
    tee ${DEPLOY}/site-config/compute-server-add-smb-mount.yaml > /dev/null << EOF
    ---
    apiVersion: builtin
    kind: PatchTransformer
    metadata:
      name: compute-server-add-smb-mount
    patch: |-
      - op: add
        path: /template/spec/volumes/-
        value:
          name: winshare-volume
          persistentVolumeClaim:
            claimName: pvc-${SHARE}
      - op: add
        path: /template/spec/containers/0/volumeMounts/-
        value:
          name: winshare-volume
          mountPath: /gelcontent/gelwinshare
    target:
      kind: PodTemplate
      version: v1
      annotationSelector: "sas.com/kustomize-base=sas-programming-environment"
    EOF

    With this patch, SAS pods will make the content of the Windows share accessible at the /gelcontent/gelwinshare directory.

 

  5. Add the patch to the transformers section of your kustomization.yaml (this example uses the yq utility, but you can do it in any way, including manually editing the file)
    [[ $(grep -c "site-config/compute-server-add-smb-mount.yaml" ${DEPLOY}/kustomization.yaml) == 0 ]] && \
    yq eval -i '.transformers += ["site-config/compute-server-add-smb-mount.yaml"]' ${DEPLOY}/kustomization.yaml

 

  6. Apply the configuration change to SAS Viya, using the same method as the original installation. For example, if using manual kubectl commands:
    pushd ${DEPLOY}
    kustomize build -o site.yaml
    kubectl apply --selector="sas.com/admin=namespace" -f site.yaml --prune
    popd
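
Before testing from SAS Studio, you can optionally confirm that everything is wired together. The following quick sketch reuses the variables defined earlier: the PV and PVC should both report a Bound status, and the patched SAS pod templates should now reference the new volume:

# the PV and the PVC should both show STATUS Bound
oc get pv pv-${SHARE}
oc -n ${NS} get pvc pvc-${SHARE}

# the patched pod templates should now include the winshare-volume definition
oc -n ${NS} get podtemplates -o yaml | grep -B 2 -A 4 winshare-volume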

 

 

Testing time!

 

The configuration should be finished. Now, it’s time to test.

 

Log on to SAS Studio and submit the following program:

 

libname winshare "/gelcontent/gelwinshare";
 
proc contents data=winshare._all_;
run;

 

The Results window will show the list of data sets available in the Windows share, together with their details.

 

You can also select the SAS Server icon in the left menu, then navigate to GelServer → gelcontent to show the directory tree that lists the share with the same files. Success!

 

10_ER_20251031_10_SASStudioTest.png

 

 

If you want a more “technical” confirmation, you can exec into the compute server pod and check from there. Assuming you logged in to SAS Studio with the username Hugh, you can use the following commands:

 

PODNAME=$(oc -n ${NS} get pods -l launcher.sas.com/username=Hugh \
  -o=custom-columns=PODNAME:.metadata.name --no-headers)
oc -n ${NS} exec -it ${PODNAME} -c sas-programming-environment -- bash -c \
  "df -h; ls -l /gelcontent/gelwinshare"

 

You should see something like this:

 

Filesystem                                                     Size  Used Avail Use% Mounted on
...
//myazurestorageaccount.file.core.windows.net/gelwinshare      120G   75M  120G   1% /gelcontent/gelwinshare
...
total 76872
-rwxrwxrwx. 1 sas sas   131072 Oct 28 21:02 class.sas7bdat
-rwxrwxrwx. 1 sas sas 39321600 Oct 28 20:45 crime.sas7bdat
-rwxrwxrwx. 1 sas sas 39264256 Oct 28 20:45 crimestats.sas7bdat

 

Finally, if you have SAS Enterprise Guide installed on your local Windows client and you log in to the SAS Viya environment, the new share (with all its content) will also appear there. Bonus point: if you have a local SAS Foundation installation on the Windows client where we initially verified the share, you will be able to access your data from both sides (local and remote):

 

11_ER_20251031_11_EGTest.png

 

 

Connecting the same share to your Windows client and to SAS Viya can greatly simplify your workflow. If you receive a CSV file via email, simply save it to the shared drive on your client (in the example above, that’s the Z:\ drive). It will be immediately available to ingest into SAS Viya.

 

 

Additional Considerations

 

You might have noticed that the SMB CSI driver, just like many other storage drivers, does not enforce any size limits on the provided volume. In the examples above, we provisioned a 120GB file share. The PV requested 100GB and the PVC requested 80GB, yet each pod had access to the full 120GB of the share.

 

More importantly, the supported authentication process introduces some limitations:

 

  • Only username and password authentication is supported. OpenShift does not support Kerberos (see https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage/using-contai... )
  • Since the authentication details are stored in a fixed Kubernetes secret that is read when mounting the share on the backend pod(s), all users will access data with that shared account and not with their personal identity. This is similar to mounting a network share on your local laptop using a shared group account. That's the reason why, in the PV definition, the mountOptions are set to 777.
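
A practical consequence of the shared account: if its password (in our case, the Azure storage account key) is rotated, the Kubernetes secret must be updated too. Here is a minimal sketch, reusing the variables from the earlier steps and assuming a hypothetical NEW_PASSWORD variable that holds the rotated key; remember that the secret is read when the share is mounted, so compute sessions started before the update may need to be restarted:

# overwrite the existing secret with the new credentials
oc -n ${NS} create secret generic ${SECRET} \
  --from-literal=username="${USERNAME}" \
  --from-literal=password="${NEW_PASSWORD}" \
  --dry-run=client -o yaml | oc apply -f -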

 

 

Conclusion

 

Integrating Windows (SMB/CIFS) shares with SAS Viya on Red Hat OpenShift is not only feasible but can significantly streamline workflows for organizations transitioning from SAS 9 to cloud-native environments. By following the outlined steps, from driver installation to persistent volume configuration, teams can maintain access to critical shared data without disrupting established business processes.

 

Is your organization planning a migration or facing similar storage challenges? Share your experiences or questions in the comments below.

 

 

Find more articles from SAS Global Enablement and Learning here.
