Introduction
Dating back to the start of 2025, Snowflake has been discussing major upcoming changes to their authentication systems. Having joined the Multi-Factor Authentication (MFA) train, they can now claim stronger defenses for their constituents against potential breaches. MFA accomplishes this by requiring a second degree of proof of identity upon login. For users of the web UI Snowsight, this is easy – put in your user ID, await the push notification on your phone from Duo (the default provider), and confirm there that the login is authentic.
Unsurprisingly, the default push notification method for MFA has its detractors, and no shortage of dissent. Though it is factually more secure than the incumbent method of user/password (plus the network policy of CIDR-block allow/block lists), some users prefer not to engage with a cell phone in order to do their jobs; others using third-party software, like SAS, don't enjoy the repetitive pinging when accessing Snowflake endpoints; and others still find it completely incompatible with scheduled jobs. After all, nobody wants to wake up at 3AM just to approve their service user’s daily workload… right?
SAS 9 is no exception to these blockers – many SAS 9 and Enterprise Guide users alike are looking to SAS for guidance on how to navigate the upcoming MFA mandates without interrupting their pipelines before the mandates go into effect. This post will address exactly that using key-pair authentication, one of three main methodologies recommended by Snowflake. Before we get started, let’s iron out which personas you need to future-proof your SAS & Snowflake pipelines:
Now, let’s dig in step by step on how to implement the key-pair methodology. Please note that this is supported for maintenance releases 9.4M6 and later.
Section 1: Ensuring Permitted SAS/Snowflake Traffic
The first thing a Snowflake admin needs to do is configure (or confirm) networking rules for the Snowflake account that will accept incoming requests from SAS 9. For server-side SAS 9 environments, the only IP address needed is the server’s. For client-side SAS 9 environments, however, the IP addresses of all users’ machines are necessary – this can be quite a hassle, which is where the common corporate VPN comes in handy. In that case, all that’s necessary is the CIDR block occupied by the corporate VPN. As long as SAS users are connected to the VPN, Snowflake won’t block their client requests. Once the full list of allowed IPs or CIDR blocks is prepared, a Snowflake admin can alter the account’s network policy as follows:
In this code block, we assume the proper role and obtain the name of the active NETWORK POLICY. We then create a NETWORK RULE (this can always be altered later if the allowlist must change) that allows the Snowflake environment to accept traffic requests from users leveraging the SAS 9 environment. Lastly, we add the rule to the policy, enacting this conditional access to the environment.
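As a sketch of those steps in Snowflake SQL – the rule name, policy name, and IP values below are illustrative placeholders, not values from any real account:

```sql
-- Assume a role that can manage account-level network policies
USE ROLE ACCOUNTADMIN;

-- Obtain the name of the active network policy
SHOW PARAMETERS LIKE 'NETWORK_POLICY' IN ACCOUNT;

-- Create a rule covering the SAS server IP and/or the corporate VPN CIDR block
CREATE NETWORK RULE sas9_access_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('203.0.113.10', '198.51.100.0/24');

-- Add the rule to the policy found above
ALTER NETWORK POLICY corp_network_policy
  ADD ALLOWED_NETWORK_RULE_LIST = (sas9_access_rule);
```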
Section 2. Setting up Key-Pair Authentication in the Snowflake Account
Before demonstrating the process for enabling an individual user to authenticate using a key-pair, a note: if your SAS environment contains a large user base (i.e. hundreds of users), manually provisioning key-pair authentication can be tedious, even grueling. It is strongly recommended to investigate the usage of scripting and user-defined functions to automatically generate the key-pairs and assign them to Snowflake users, respectively. With that noted, let’s discuss how to implement each user’s key-pair authentication. As a best practice, the Snowflake statements in this section should be executed by an administrator leveraging the SECURITYADMIN role (or higher).
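At its core, registering a user's key comes down to one ALTER USER statement. A minimal sketch, assuming a hypothetical user SAS_USER01 and a truncated placeholder key value:

```sql
USE ROLE SECURITYADMIN;

-- Set the public key: paste the body of rsa_key.pub,
-- excluding the BEGIN/END PEM delimiter lines
ALTER USER SAS_USER01 SET RSA_PUBLIC_KEY='MIIBIjANBgkqh...';

-- Confirm the RSA_PUBLIC_KEY_FP (fingerprint) property is now populated
DESC USER SAS_USER01;
```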
Section 3a: SAS 9 Client-Side Windows ODBC Set-Up
For many SAS 9 users, there is no central server to which all users connect. Sometimes users, especially in test & dev environments, design their workloads and run their compute on their own machines. This section addresses how those users can individually configure their Snowflake connections to satisfy the MFA mandate:
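On Windows, the Snowflake DSN is typically configured through the ODBC Data Source Administrator GUI, but the fields that matter for key-pair authentication can be sketched in ini form as follows (the server, user, key path, and password values are placeholders):

```ini
[SnowflakeDSN]
Driver            = SnowflakeDSIIDriver
Server            = <account>.snowflakecomputing.com
UID               = SAS_USER01
Authenticator     = SNOWFLAKE_JWT
PRIV_KEY_FILE     = C:\Users\sasuser\keys\rsa_key.p8
PRIV_KEY_FILE_PWD = XXXXXXXXXXX
```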
Section 3b: SAS 9 Server ODBC Set-Up
Unlike single-user environments, many deployments of SAS 9 are centralized on a Linux server. For the individual user, the access to SAS may still be via the SAS 9 client program, or Enterprise Guide, but the compute and external connections for all users are centralized on some remote machine. This section addresses how a SAS administrator can configure multiple users’ key-pair connections from the SAS Server to Snowflake.
For SAS 9 environments with a significant number of users, the process of creating unique key-pairs for every user and implementing them in a POSIX-controlled structure on the SAS 9 server can be tedious. A shell script to loop through the user base and manage the permissions on the private key files can relieve much of this difficulty.
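A minimal sketch of such a script, following Snowflake's documented openssl procedure for generating encrypted PKCS#8 key-pairs – the user list, directory layout, and passphrase handling below are illustrative assumptions:

```shell
#!/bin/sh
# Generate one encrypted key-pair per user and lock down the private material.
USERS="user01 user02"          # hypothetical SAS user IDs
KEYROOT=./snowflake_keys       # hypothetical key directory

for u in $USERS; do
  dir="$KEYROOT/$u"
  mkdir -p "$dir"
  pwd_file="$dir/passphrase.txt"

  # Illustrative only: a random passphrase; source from a vault in practice
  openssl rand -hex 16 > "$pwd_file"

  # Encrypted private key in PKCS#8 format (what PRIV_KEY_FILE expects)
  openssl genrsa 2048 2>/dev/null | \
    openssl pkcs8 -topk8 -v2 aes-256-cbc -inform PEM \
      -out "$dir/rsa_key.p8" -passout "file:$pwd_file"

  # Matching public key, to be registered on the Snowflake user
  openssl rsa -in "$dir/rsa_key.p8" -passin "file:$pwd_file" \
    -pubout -out "$dir/rsa_key.pub" 2>/dev/null

  # Restrict the private material to the owning user
  chmod 600 "$dir/rsa_key.p8" "$pwd_file"
  chmod 700 "$dir"
  # chown "$u" "$dir" ...    # requires root; shown for completeness
done
```

In practice the passphrases would come from a vault rather than openssl rand, and a root-run chown would hand each directory to its owning user.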
Section 4: Connecting from SAS to the Snowflake Account
Now that the Snowflake account has been successfully configured, the final step is to test the connection from SAS itself. Just as with any other SAS library, the connection to Snowflake is created with a LIBNAME statement.
For client-side SAS 9 installations, the LIBNAME statement can directly reference the DSN, only overwriting the connection parameters that may change on a day-to-day basis at the user’s discretion, like database, schema, or role:
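A sketch of what that can look like, assuming a hypothetical DSN named SnowflakeDSN and placeholder database/schema/role values:

```sas
/* The DSN supplies the server, user, authenticator, and key-file options;
   CONOPTS overrides only the session-level parameters */
libname mysnow odbc
  dsn="SnowflakeDSN"
  conopts="DATABASE=SALES_DB;SCHEMA=PUBLIC;ROLE=ANALYST_ROLE;";
```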
Based on the options entered when the DSN was configured on the local machine, any unlisted connection options will be automatically populated from the DSN itself. Thus, there’s no need to reference the options like PRIV_KEY_FILE, PRIV_KEY_FILE_PWD, or authenticator in the LIBNAME statement. Users are, as always, free to add other LIBNAME options to their statements.
For a server-side SAS 9 installation, the LIBNAME statement requires a little more individual configuration and change from its pre-MFA code. With a server-side installation, the LIBNAME statement references the DSN defined by the administrator in the odbc.ini file on the Linux server. In the standard setup wherein multiple users connect to the same SAS 9 server, said users will have different private key files and passwords, as discussed in Section 3b. As such, the DSN itself cannot define a single blanket value for PRIV_KEY_FILE or PRIV_KEY_FILE_PWD to propagate to all users. Instead, each user must define those two connection options in their LIBNAME statement, pointing to their respective key and password:
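As a sketch, with a hypothetical shared DSN SnowflakeDSN and one user's placeholder user ID, key path, and passphrase:

```sas
/* Driver, server, and authenticator settings come from the shared odbc.ini
   DSN; only the per-user options are supplied here */
libname mysnow odbc
  dsn="SnowflakeDSN"
  conopts="UID=SAS_USER01;PRIV_KEY_FILE=/home/user01/snowflake_keys/rsa_key.p8;PRIV_KEY_FILE_PWD=XXXXXXXXXXX;";
```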
This methodology allows multiple users to leverage the same Snowflake DSN (in turn, leveraging the Snowflake Driver configured on the machine) but with unique key-pairs, thus satisfying both MFA requirements and organizational security mandates.
Parting Words
I hope this article provides clarity for users and administrators alike in joint deployments of SAS 9 & Snowflake. The key-pair authentication methodology provides a safe and secure pathway to satisfying Snowflake’s MFA requirements without potentially sacrificing workload efficiency by using push notifications. This can play a large role in reducing blockers, especially for teams with significant scheduled or service workloads.
In future updates I will explore the step-by-step differences in setup for this methodology for users of SAS Viya, in addition to discussing the wider breadth of supported authentication options that Viya brings. Feel free to send any questions or comments to me at Joe.Cabral@sas.com. As always, thanks for reading, and for playing my new favorite writer’s game: “is this hyperlink for documentation or a 90’s music video?”
Hi Joe,
Thanks for a very thorough set of instructions to set up the Snowflake connector using ODBC and also the SNOW engine. The challenge I am facing now is trying to hide the RSA password from my users (I am a SAS Admin). If I use the SNOW engine and register the tables, the user can still right-click the library in SAS Enterprise Guide, look at its properties, and see the LIBNAME statement for that library with the RSA key location and the password, even if encoded. If I use ODBC, the odbc.ini file becomes accessible to the user and will result in the same outcome.
Is there a way of hiding the password for the RSA key so that either engine (ODBC and SNOW) works?
Many thanks.
@RGarrido - The way we hide our Snowflake account and password details using SAS/ACCESS Interface to Snowflake is by creating an Authentication Domain via SAS Management Console. We then create a Snowflake user group via User Manager then add the Snowflake account and password in the Accounts tab like so:
Then in the Snowflake Server connection definition in SMC, the Snowflake Authentication domain is used:
You can then create Data Library definitions in SMC that use the Snowflake server and connection definitions.
@SASKiwi - thank you for your response. We use AUTHDOMAIN to hide the username and password when creating a SAS to Snowflake connection, but with an RSA key it won't work, because connecting to Snowflake using an RSA key relies on the CONOPTS option like so:
LIBNAME TEST SNOW
SERVER="snowflakecomputing.com"
DATABASE='TESTONE'
WAREHOUSE='DEMO_ONE'
SCHEMA='TITLE'
ROLE='USER_ROLE'
USER='SASUSER_01'
CONOPTS="AUTHENTICATOR=SNOWFLAKE_JWT;
PRIV_KEY_FILE=/LOCATION/OF/KEY/rsa_key.p8;
PRIV_KEY_FILE_PWD=XXXXXXXXXXX;";
Adding an authdomain to hide the username and priv_key_file like so:
LIBNAME TEST SNOW
SERVER="snowflakecomputing.com"
DATABASE='TESTONE'
WAREHOUSE='DEMO_ONE'
SCHEMA='TITLE'
ROLE='USER_ROLE'
CONOPTS="AUTHENTICATOR=SNOWFLAKE_JWT;
PRIV_KEY_FILE=/LOCATION/OF/KEY/rsa_key.p8;"
AUTHDOMAIN=USER_AUTHDOMAIN;
The password inside the AUTHDOMAIN does not get passed to the authenticator (snowflake_jwt).
Hi @RGarrido - if I'm understanding your situation properly, there are a few options that might work here. I have yet to test these myself to confirm, but my initial ideas are as follows:
1. Encode the private key passphrase with PROC PWENCODE and use the encoded value in CONOPTS.
2. Store the passphrase in a macro variable and resolve it inside the CONOPTS string at run time.
3. Give each user their own private key file, protected by file-system permissions.
Let me know if you have any success with these options and if they meet your system's requirements. If not, feel free to schedule a call with me and I will update this thread with any conclusions we reach.
@JCabralSAS - For the first two you have suggested, PWENCODE will not work because the encoded password is framed inside the CONOPTS string (which is Java based). Neither will a macro variable when called inside CONOPTS. So:
PRIV_KEY_FILE_PWD={SAS002}706634535C9F3B793CCCB39C4F553FA2; <- will not be read by conopts nor
PRIV_KEY_FILE_PWD=&password.;
I tried using the ODBC engine, and while it works, you need to chmod the permissions on the odbc.ini path for users to read the file, which means they can get to the password from a terminal by running a command such as cat /etc/odbc.ini and examining the odbc.ini file where the password is stored.
For the third option you suggested, users having their own private key files will be difficult to control from a security standpoint. Normally, a username and password (Snowflake account) is embedded in an authdomain and we assign AD groups to these authdomains. Similarly, if an RSA key is used, the key will be shared across an AD group; otherwise, if each user has their own unique private key, these need to be mapped to the Snowflake service account through a Jenkins pipeline job. In the end, this means that one Snowflake service account gets mapped to multiple service keys instead of just one.
Hi @RGarrido - thanks for following up. Regarding the third option:
The current setup of Snowflake's security regarding RSA key pairs assumes that every user has a separate, manually assigned private key pair. While unfortunately tedious and a hassle to manage for large user bases, it is the only officially supported mechanism bridging SAS 9 and Snowflake without triggering MFA push notifications.
One workaround might be to create an odbc.ini file that contains DSNs for all service accounts and nothing else - essentially a long .ini file containing the values for each service account's PRIV_KEY_FILE & PRIV_KEY_FILE_PWD. These, in turn, are linked to the respective Snowflake users that they operate as. Then, you can chmod that odbc.ini file to be readable only by the users/groups/identities on the server that are assumed by the service users. This would prevent any of your human users from leveraging a terminal to print out the odbc.ini file and examine its contents.
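A sketch of what one entry in that service-only file could look like (driver path aside, every value below is an illustrative placeholder):

```ini
; e.g. /etc/odbc_services.ini, chmod 640, group-owned by the service identities
[svc_etl]
Driver            = /usr/lib64/snowflake/odbc/lib/libSnowflake.so
server            = <account>.snowflakecomputing.com
uid               = SVC_ETL
authenticator     = SNOWFLAKE_JWT
PRIV_KEY_FILE     = /etc/snowflake/keys/svc_etl_rsa_key.p8
PRIV_KEY_FILE_PWD = XXXXXXXXXXX
```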
Now, this does mean you would need a separate ODBC.ini file that's readable for your human users. Since the ODBCINI environment variable is defined before the autoexec, the configuration would have to be made before launching SAS, which might pose a major problem that's beyond my knowledge. I'll keep thinking on ways that could allow linking multiple different ODBC.ini files, but I haven't seen that done before.
If you are able to leverage two separate ODBC.ini files at different times, then, all your users would have to do is submit the CONOPTS with the DSN name and the key file & password associated with their Snowflake human user. This assumes that every human user in SAS has a human user in Snowflake. If that's not the case, you will likely have human users with shared access to a Snowflake account & therefore shared access to its associated private key file & password. In the absence of the second file workaround, we can pass the workaround to the user (e.g. have them define everything except the driver about their Snowflake connection in their CONOPTS), though I've been desperately trying to avoid doing that.
If all else fails, you can try leveraging PATs (Programmatic Access Tokens) for authentication, basically treating the PAT as a password in a LIBNAME statement, including in Auth Domains. I have seen some success here and the execution is pretty simple, BUT with the major caveat that it is NOT officially supported by SAS, so you may have to experiment with it.
I apologize for the inconvenience of these recommendations, but I do hope that this points you in the right direction.
Thanks for this @JCabralSAS. I have gone through the paths you outlined and couldn't find a solution that would actually work. The closest one would be to use the ODBC engine and mask the password, and maybe the path to the private key file, using some environment variables. As you mentioned, there are a lot of challenges, and there are other options like PAT. User FreshStarter is also keen to find out what works best for this, so I guess there are others out there facing these challenges.
Were you able to fix the issue? Can you share the Auth Domain setup config and where the PAT token was saved?
Will this odbc.ini config work with Auth domain setup?
ODBC.ini
[ODBC Data Sources]
Snowflake_pat = Snowflake ODBC Driver 64-bit

[Snowflake_pat]
Driver=/usr/lib64/snowflake/odbc/lib/libSnowflake.so
server=<xxx.snowflakecomputing.com>
role=<snowflake_consumer>
warehouse=<snowflake_wh>
database=<db_name>
authenticator=programmatic_access_token
SAS code
libname PATSNOW snow
dsn = "Snowflake_pat"
authdomain = "snowflake_pat_auth"
schema = <schema>
;
proc sql;
connect using PATSNOW as snow;
execute(
create or replace table &db_primary..&schema_temp..sample_table as
select *
from &db_primary..&schema_temp..tmp_table
limit 100
) by snow;
disconnect from snow;
quit;
libname PATSNOW clear;
In case we do not use an auth domain, can we use the PAT token directly in the odbc.ini?
Hi @Soundappan
I have switched the authentication from PAT to a key pair, and it works for me. As I mentioned earlier, the PAT token cannot be added to the AuthDomain in the SAS Management Console due to a length constraint. The PAT token is 222 characters long, while the AuthDomain allows a maximum of 128 characters.
However, I recall that when we hardcoded the PAT in the LIBNAME statement using the PWD parameter, it worked.
Similarly, if we hardcode the same PAT in the odbc.ini file under the Password parameter, it will work as well.
@RGarrido - Completely understand that using a CONOPTS mechanism isn't going to work with AUTHDOMAIN. Needs to be via a SAS-supported option.
Thank you, @JCabralSAS, for your post. It was very useful.
We are planning to switch from password-based authentication to key-pair authentication for SAS to Snowflake. Our current requirement is:
1. Overnight batch/service account: key-pair authentication
2. User authentication: normal password authentication with DUO MFA
We are using SAS 9.4M7 on a RHEL server.
I have created two DSNs in the odbc.ini file:
1. Password-based DSN: without the authenticator parameter
2. Key-pair DSN: with authenticator=snowflake_jwt and the PRIV_KEY_FILE and PRIV_KEY_FILE_PWD parameters
Using the key-pair DSN, I can successfully run the LIBNAME statement from SAS Enterprise Guide and connect to Snowflake. I am also able to define a pre-assigned SAS library for key-pair authentication and use it successfully.
Challenge
Storing the PRIV_KEY_FILE path and passphrase in the odbc.ini file.
Questions:
1. I am not able to pass PRIV_KEY_FILE and PRIV_KEY_FILE_PWD directly in the odbc.ini file due to security concerns. I noticed your Windows example included these parameters; can this approach be applied to Linux ODBC drivers?
2. If it is possible, would these parameters need to be DSN-specific? (We have one DSN for password authentication and another for key-pair authentication.)
I also attempted to set environment variables as follows, but it did not work:
export PRIV_KEY_FILE
export PRIV_KEY_FILE_PWD
Could you please advise on the recommended approach to overcome these issues?
Thank you for your time and guidance.
Hi @freshstarter, thank you for reading. I'm glad it helped!
Security of the PRIV_KEY_FILE & PRIV_KEY_FILE_PWD content has been a tricky thorn in my side, especially since I'm more of a Snowflake user-admin than a security guru. BUT let's see what we can do.
As for the export issue, as another user mentioned, the export command for setting Linux environment variables doesn't automatically propagate into the SAS system. There's a quick how-to on SAS's Customer Support Center plus some documentation here that dives well beyond my knowledge about setting and ingesting environment variables into SAS 9. I hope this helps!
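For reference, one common pattern on Linux SAS 9 is to define such variables with the -SET system option in a config file and read them back with %SYSGET; the variable name below is a hypothetical example:

```sas
/* In sasv9_local.cfg (or on the batch launch command line):
     -SET SNOW_KEY_PWD "XXXXXXXXXXX"
*/

/* Inside the SAS session, retrieve the value */
%let snow_key_pwd = %sysget(SNOW_KEY_PWD);
```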
Here is a step forward in coding the LIBNAME for the SAS to Snowflake connector using an RSA key + passphrase: replace USER with AUTHDOMAIN.
LIBNAME TEST SNOW
SERVER="snowflakecomputing.com"
DATABASE='TESTONE'
WAREHOUSE='DEMO_ONE'
SCHEMA='TITLE'
ROLE='USER_ROLE'
USER='SASUSER_01'
CONOPTS="AUTHENTICATOR=SNOWFLAKE_JWT;
PRIV_KEY_FILE=/LOCATION/OF/KEY/rsa_key.p8;
PRIV_KEY_FILE_PWD=XXXXXXXXXXX;";
With the above code, there is no filtering of the users that can run it. Anyone with SAS access can run the code and see the tables. What I have tested is to add the AUTHDOMAIN into the code and remove the USER line. This will do three things:
1. Filter the right users to have access to the connection. Only the groups that are given access via the AUTHDOMAIN are able to run the code.
2. In SAS EG, right-clicking the library used to give full information about the LIBNAME. Now those details are hidden, including the user ID.
3. The user ID is hidden from the user. This is an important component for logging in to Snowflake. Without a user ID, a user cannot log in even with an RSA key and a passphrase.
The code that I ran is like the one below. NO USER line and uses AUTHDOMAIN:
LIBNAME TEST SNOW
SERVER="snowflakecomputing.com"
DATABASE='TESTONE'
WAREHOUSE='DEMO_ONE'
SCHEMA='TITLE'
ROLE='USER_ROLE'
AUTHDOMAIN='TEST_RSA_CONNECT'
CONOPTS="AUTHENTICATOR=SNOWFLAKE_JWT;
PRIV_KEY_FILE=/LOCATION/OF/KEY/rsa_key.p8;
PRIV_KEY_FILE_PWD=XXXXXXXXXXX;";