JCabralSAS
SAS Employee

Introduction

 

Dating back to the start of 2025, Snowflake has been discussing major upcoming changes to their authentication systems. Having joined the Multi-Factor Authentication (MFA) train, they can now claim stronger defenses for their constituents against potential breaches. MFA accomplishes this by requiring a second degree of proof of identity upon login. For users of the web UI Snowsight, this is easy – put in your user ID, await the push notification on your phone from Duo (the default provider), and confirm there that the login is authentic.

 

Unsurprisingly, the default push notification method for MFA has its drawbacks, and no shortage of dissenters. Though it is factually more secure than the incumbent method of user/password (plus the network policy of CIDR-block allow/block lists), some users prefer not to engage with a cell phone in order to do their jobs; others using third-party software, like SAS, don't enjoy the repetitive pinging when accessing Snowflake endpoints; and others still find it completely incompatible with scheduled jobs. After all, nobody wants to wake up at 3AM just to approve their service user’s daily workload… right?

 


 

SAS 9 is no exception to these blockers – many SAS 9 & EG users alike are looking to SAS for guidance on how to navigate the upcoming MFA mandates without interrupting their pipelines before the mandates go into effect. This post will address exactly that using key-pair authentication, one of three main methodologies recommended by Snowflake. Before we get started, let’s iron out which personas you need to future-proof your SAS & Snowflake pipelines:

 

  1. An admin user for the Snowflake account – preferably someone with access to the ACCOUNTADMIN role.
  2. An admin user for the SAS 9 license.
    1. For server-side SAS 9 environments, the admin must have access to the central machine on which SAS 9 is running.
    2. For client-side SAS 9 environments, users will individually configure their machines – but some team administrator will still be required to generate key pairs.

 

Now, let’s dig in step by step on how to implement the key-pair methodology. Please note that this is supported for Maintenance versions 9.4M6 and later.

 

 

Section 1: Ensuring Permitted SAS/Snowflake Traffic

 

The first thing a Snowflake admin needs to do is configure (or confirm) networking rules for the Snowflake account that will accept incoming requests from SAS 9. For server-side SAS 9 environments, the only IP address needed is the server's. For client-side SAS 9 environments, however, the IP addresses of all users’ machines are necessary – this can be quite a hassle, which is where the common corporate VPN comes in handy. In that case, all that’s necessary is the CIDR block occupied by the corporate VPN. As long as SAS users are connected to the VPN, Snowflake won’t block their client requests. Once the full list of allowed IPs or CIDR blocks is prepared, a Snowflake admin can alter the account’s network policy as follows:

 

[Screenshot: SQL statements altering the account's network policy]

 

In this code block, we assume the proper role and obtain the name of the active NETWORK POLICY. We then create a NETWORK RULE (this can always be altered later if the allowlist must change) that enables the Snowflake environment to accept traffic from users leveraging the SAS 9 environment. Lastly, we add the rule to the policy, enacting this conditional access to the environment.
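A sketch of what those statements might look like – the role, rule, policy name, and IP/CIDR values are all placeholders, so verify the exact syntax against current Snowflake documentation:

```sql
-- Placeholders throughout: policy name, rule name, and IP/CIDR values.
USE ROLE ACCOUNTADMIN;

-- Confirm which network policy is active on the account.
SHOW PARAMETERS LIKE 'NETWORK_POLICY' IN ACCOUNT;

-- Allowlist the SAS server's IP and/or the corporate VPN's CIDR block.
CREATE NETWORK RULE IF NOT EXISTS sas9_ingress_rule
  TYPE = IPV4
  MODE = INGRESS
  VALUE_LIST = ('203.0.113.10', '198.51.100.0/24');

-- Attach the rule to the active policy's allowlist.
ALTER NETWORK POLICY my_network_policy
  ADD ALLOWED_NETWORK_RULE_LIST = (sas9_ingress_rule);
```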

 

 

Section 2: Setting up Key-Pair Authentication in the Snowflake Account

Before demonstrating the process for giving an individual user the ability to authenticate using a key-pair, a note: if your SAS environment contains a large user base (i.e. hundreds of users), the manual provisioning of key-pair authentication can be tedious, even grueling. It is strongly recommended to investigate the usage of scripting and user-defined functions to automatically generate the key-pairs and assign them to Snowflake users, respectively. With that noted, let’s discuss how to implement each user’s key-pair authentication. As a best practice, the Snowflake statements in this section should be executed by an administrator leveraging the SECURITYADMIN role (or higher).

 

  1. To generate the key-pair, follow along with Snowflake’s official documentation. It is recommended to use an encrypted key. Do not deviate from Snowflake’s key-generation procedure, as differently formatted keys will fail.
  2. Keep track of the location of both the private key file (.p8) and the private key file’s password. It may be useful to create a user directory structure to track which files and passwords are associated with each user. It may also be useful for users to store their passwords in a key vault (on top of ideally remembering them).
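As an illustration of steps 1–2, the key generation per Snowflake's documented procedure looks roughly like this. The passphrase and file names are placeholders; in practice, let openssl prompt for the passphrase rather than passing it inline on the command line:

```shell
# Hedged sketch of Snowflake's documented key-generation procedure.
# Passphrase and file names are placeholders for illustration only.

# 1. Generate a 2048-bit RSA key and wrap it as an encrypted PKCS#8 (.p8) file.
openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 aes256 -inform PEM \
  -out rsa_key.p8 -passout pass:MyKeyPassword123

# 2. Derive the matching public key -- its body (minus the PEM header/footer
#    lines) is what gets assigned to the Snowflake user's RSA_PUBLIC_KEY.
openssl rsa -in rsa_key.p8 -passin pass:MyKeyPassword123 -pubout -out rsa_key.pub
```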
  3. Use the ALTER USER statement to add the RSA_PUBLIC_KEY parameter. If the user does not yet exist in the Snowflake environment, you can create them as follows:

    [Screenshot: SQL creating the Snowflake user and setting its RSA_PUBLIC_KEY]
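Sketched in SQL, step 3 might look like the following – the user name, default role, and the truncated key value are placeholders:

```sql
USE ROLE SECURITYADMIN;

-- Create the user if it doesn't exist yet (placeholder name and defaults).
CREATE USER IF NOT EXISTS sas_batch_user
  DEFAULT_ROLE = ANALYST
  MUST_CHANGE_PASSWORD = FALSE;

-- Assign the public key: paste the contents of rsa_key.pub WITHOUT the
-- "-----BEGIN/END PUBLIC KEY-----" delimiter lines.
ALTER USER sas_batch_user SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh<truncated placeholder>';
```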

     

  4. In current (as of this article, dated August 2025) Snowflake bundles, MFA has not yet been made 100% mandatory. As such, there are still opt-ins and opt-outs. To ensure that the key-pair methodology is working, enable MFA for the user. For now, whenever the user authenticates without the key, they will have to address push notifications to satisfy MFA:

    [Screenshot: SQL enabling MFA for the user]
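The exact statements from the screenshot aren't reproduced here; one documented route to requiring MFA is an authentication policy (policy and user names are placeholders – verify against current Snowflake documentation, as these features evolve across bundles):

```sql
-- Placeholder names; requires a role with the appropriate privileges.
CREATE AUTHENTICATION POLICY IF NOT EXISTS require_mfa_policy
  MFA_ENROLLMENT = REQUIRED;

ALTER USER sas_batch_user SET AUTHENTICATION POLICY require_mfa_policy;
```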

     

     
  5. With the Snowflake-side management handled – if using client-side SAS 9 on Windows, go to the SAS 9 Client-Side Windows ODBC Set-Up section. If using a server-side SAS 9 environment (regardless of the client OS), go to the SAS 9 Server ODBC Set-Up Section. For client-side Mac and Linux SAS 9 environments, the procedure will align very closely with the Server ODBC Set-Up, the main difference being the file references are on the client machine, not a centralized server.

 

 

Section 3a: SAS 9 Client-Side Windows ODBC Set-Up

For many SAS 9 users, there is no central server to which all users connect. Sometimes users, especially in test & dev environments, design their workloads and run their compute on their own local machines. This section addresses how those users can individually configure their Snowflake connections to satisfy the MFA mandate:

 

  1. If not already done, follow the Snowflake documentation to install the ODBC driver on the machine. Make sure not to skip the section on the “Visual C++ Redistributable for Visual Studio 2015.”
  2. With the driver installed, open the “ODBC Data Sources (64-bit)” program on the machine.

    [Screenshot: the Windows “ODBC Data Sources (64-bit)” program]

     

  3. Click “Add” to create a new Data Source Name (User DSN and System DSN both work here – but note which one!), select the “SnowflakeDSIIDriver” option, click “Finish,” and populate it. Note that the database, schema, warehouse, and role can all be overwritten by a LIBNAME statement, so long as the Snowflake user has access to the requested resources. Also note the inclusion of the snowflake_jwt value in the Authenticator field. The authenticator tells the system not to use a username & password combination to login to Snowflake, instead using the private key file & password:

    [Screenshot: the Snowflake DSN configuration dialog]

     

  4. Clicking the “Test” button at this point will fail: while the ODBC system knows not to leverage a user & password to authenticate, it does NOT yet know where the key file & password that it must use are located. To instruct the system to leverage this information, the key file location and password must be defined with respect to the DSN. With the newest versions of the ODBC program & Snowflake Driver, this can actually be done in the above GUI by entering the private key file location and password in their respective cells. In this case, enter both the file path and the password without quotes, then skip to step 9. In older versions, like the one in the above screenshot, we must instead use the Registry Editor. Click “OK” and open the Registry Editor (regedit) program on the machine:

    [Screenshot: opening the Registry Editor (regedit)]

     

  5. Registry Editor is useful for entering options for ODBC DSN’s that aren’t visually present in the Windows GUI. The two options necessary for this implementation are PRIV_KEY_FILE and PRIV_KEY_FILE_PWD, neither of which are present in the GUI from Step 3.
    1. If Step 3 created a USER DSN: follow the path “Computer\HKEY_CURRENT_USER\Software\ODBC\ODBC.ini.”
    2. If Step 3 created a SYSTEM DSN: follow the path “Computer\HKEY_LOCAL_MACHINE\Software\ODBC\ODBC.ini.”
  6. Locate the entry under ODBC.ini corresponding to the newly created DSN:

    [Screenshot: the DSN's entry under ODBC.ini in Registry Editor]

     

  7. Right-click anywhere in the right side of the screen (which contains all the keys & values associated with the DSN) and select “New -> String Value.” Enter the name PRIV_KEY_FILE. Repeat this to create a second String Value named PRIV_KEY_FILE_PWD.
  8. At this point, there are two unpopulated options defined for the DSN. To define each one, simply double-click it and paste in the value.
    1. For the PRIV_KEY_FILE option, copy the file path of the private key (.p8) file located on the machine. Do NOT surround the file path with quotes.
    2. For the PRIV_KEY_FILE_PWD option, copy the password defined for the encryption when the key was generated.
    3. Close the window.
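Exported as a .reg fragment, the two values from steps 7–8 would look roughly like this (DSN name, path, and password are placeholders; note the doubled backslashes required by the .reg format):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\MySnowflakeDSN]
"PRIV_KEY_FILE"="C:\\keys\\rsa_key.p8"
"PRIV_KEY_FILE_PWD"="MyKeyPassword123"
```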
  9. With the options added to the DSN and defined, the DSN can be tested. Reopen the GUI from Step 3 by double-clicking its corresponding entry in the ODBC menu, then select “Test.” If successful, there will be a confirmation dialog, and no push notifications will be sent to the MFA-enrolled user. Additional confirmation of success can be obtained by editing the ODBC DSN definition, removing the snowflake_jwt value from the Authenticator option, and clicking “Test” again. In this case, the user should immediately be sent a push notification asking for identity verification, confirming the authenticator is doing its job. Re-enter the snowflake_jwt value and close the ODBC Data Sources window.

 

 

Section 3b: SAS 9 Server ODBC Set-Up

Unlike single-user environments, many deployments of SAS 9 are centralized on a Linux server. For the individual user, the access to SAS may still be via the SAS 9 client program, or Enterprise Guide, but the compute and external connections for all users are centralized on some remote machine. This section addresses how a SAS administrator can configure multiple users’ key-pair connections from the SAS Server to Snowflake.

 

  1. If not already installed on the Linux machine, follow the Snowflake documentation to install the ODBC Driver.
  2. Locate the following server-side files: ODBC.ini, ODBCINST.ini, and simba.snowflake.ini as listed in the Snowflake documentation.
    1. If the Snowflake driver has not been configured at all, continue following the directions to configure the simba.snowflake.ini and ODBCINST.ini files.
  3. In the ODBC.ini file, create a line for the DSN connection to the Snowflake account:

    [Screenshot: the [ODBC Data Sources] entry in ODBC.ini]

     

  4. Then, add an entry for the DSN’s options, and add the following information:

    [Screenshot: the DSN's option entries in ODBC.ini]

     

  5. Do NOT include the PRIV_KEY_FILE or PRIV_KEY_FILE_PWD in the DSN definition UNLESS the environment intentionally uses the SAME key-pair for ALL Snowflake users. Sharing key-pairs across multiple users is heavily discouraged. As such, in most, if not all, production environments, every user will have a unique key-pair, which will be referenced in the LIBNAME statements that leverage this DSN.
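Putting steps 3–5 together, the server-side odbc.ini entries might look roughly like this – the driver path, account URL, and resource names are placeholders:

```ini
[ODBC Data Sources]
snow_dsn = SnowflakeDSIIDriver

[snow_dsn]
Driver        = /usr/lib64/snowflake/odbc/lib/libSnowflake.so
Server        = myaccount.snowflakecomputing.com
Database      = MY_DB
Schema        = PUBLIC
Warehouse     = MY_WH
Role          = ANALYST
Authenticator = snowflake_jwt
# PRIV_KEY_FILE and PRIV_KEY_FILE_PWD intentionally omitted -- each user
# supplies their own in the LIBNAME statement.
```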

 

For SAS 9 environments with a significant number of users, the process of creating unique key-pairs for every user and implementing them in a POSIX-controlled structure on the SAS 9 server can be tedious. A shell script to loop through the user base and manage the permissions on the private key files can relieve much of this difficulty.
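A minimal sketch of such a script, assuming a per-user directory layout of KEYROOT/userid/rsa_key.p8 (the paths and user names here are illustrative fixtures, and the chown is commented out because it requires root):

```shell
# Hedged sketch: lock down per-user private key files so only the owning
# user can read them. The directory layout is an assumption for illustration.
KEYROOT=${KEYROOT:-/tmp/snowkeys}

# Demo fixture only -- in a real environment these directories already exist.
mkdir -p "$KEYROOT/alice" "$KEYROOT/bob"
touch "$KEYROOT/alice/rsa_key.p8" "$KEYROOT/bob/rsa_key.p8"

for userdir in "$KEYROOT"/*/; do
  user=$(basename "$userdir")
  # chown -R "$user" "$userdir"    # uncomment when running as root
  chmod 700 "$userdir"             # only the owner may enter the directory
  chmod 600 "$userdir"/*.p8        # only the owner may read the key file
done
```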

 

 

Section 4: Connecting from SAS to the Snowflake Account

Now that the Snowflake account has been successfully configured, the final step is to test the connection from SAS itself. Just as with any other SAS library, the connection to Snowflake is created with a LIBNAME statement.

 

For client-side SAS 9 installations, the LIBNAME statement can directly reference the DSN, only overwriting the connection parameters that may change on a day-to-day basis at the user’s discretion, like database, schema, or role:

 

[Screenshot: client-side LIBNAME statement referencing the DSN]

 

Based on the options entered when the DSN was configured on the local machine, any unlisted connection options will be automatically populated from the DSN itself. Thus, there’s no need to reference the options like PRIV_KEY_FILE, PRIV_KEY_FILE_PWD, or authenticator in the LIBNAME statement. Users are, as always, free to add other LIBNAME options to their statements.
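A minimal sketch of such a client-side statement, assuming a DSN named MySnowflakeDSN (the libref and schema are placeholders):

```sas
/* The key file, password, and snowflake_jwt authenticator all come from
   the DSN definition -- nothing sensitive appears in the code. */
libname snowlib odbc
  dsn="MySnowflakeDSN"
  schema="PUBLIC";
```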

 

For a server-side SAS 9 installation, the LIBNAME statement requires a little more individual configuration and change from its pre-MFA code. With a server-side installation, the LIBNAME statement references the DSN defined by the administrator in the ODBC.ini file on the Linux server. In the standard setup wherein multiple users connect to the same SAS 9 server, said users will have different private key files & passwords, as discussed in Section 3b. As such, the DSN itself cannot define a single blanket value for PRIV_KEY_FILE or PRIV_KEY_FILE_PWD to propagate to all users. Instead, each user must define those two connection options in their LIBNAME statement, pointing to their respective key and password:

 

[Screenshot: server-side LIBNAME statement with PRIV_KEY_FILE and PRIV_KEY_FILE_PWD in CONOPTS]
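A sketch of such a server-side statement, assuming the DSN from Section 3b (the libref, key path, and password are placeholders):

```sas
/* Each user points to their own key file and password via CONOPTS;
   everything else is inherited from the shared DSN. */
libname snowlib odbc
  dsn="snow_dsn"
  schema="PUBLIC"
  conopts="PRIV_KEY_FILE=/home/jdoe/keys/rsa_key.p8;PRIV_KEY_FILE_PWD=MyKeyPassword123;";
```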

 

This methodology allows multiple users to leverage the same Snowflake DSN (in turn, leveraging the Snowflake Driver configured on the machine) but with unique key-pairs, thus satisfying both MFA requirements and organizational security mandates.

 

 

Parting Words

I hope this article provides clarity for users and administrators alike in joint deployments of SAS 9 & Snowflake. The key-pair authentication methodology provides a safe and secure pathway to satisfying Snowflake’s MFA requirements without potentially sacrificing workload efficiency by using push notifications. This can play a large role in reducing blockers, especially for teams with significant scheduled or service workloads.

 

In future updates I will explore the step-by-step differences in setup for this methodology for users of SAS Viya, in addition to discussing the wider breadth of supported authentication options that Viya brings. Feel free to send any questions or comments to me at Joe.Cabral@sas.com. As always, thanks for reading, and for playing my new favorite writer’s game: “is this hyperlink for documentation or a 90’s music video?”

 

15 Replies
RGarrido
Fluorite | Level 6

Hi Joe,

Thanks for a very thorough set of instructions to set up the Snowflake connector using ODBC and also the SNOW engines. The challenge I am facing now is to try to hide the RSA password from my users (I am a SAS Admin). If I use the SNOW engine and register the tables, the user can still right-click the library in SAS Enterprise Guide, look at its properties, and see the libname statement for that library with the RSA key location and the password, even if encrypted. If I use ODBC, the ODBC ini file becomes accessible to the user and will result in the same ending.

 

Is there a way of hiding the password for the RSA key and for either engines (ODBC and SNOW) to work?

Many thanks.

SASKiwi
PROC Star

@RGarrido  - The way we hide our Snowflake account and password details using SAS/ACCESS Interface to Snowflake is by creating an Authentication Domain via SAS Management Console. We then create a Snowflake user group via User Manager then add the Snowflake account and password in the Accounts tab like so:

[Screenshot: Snowflake user group Accounts tab in SAS Management Console]

Then in the Snowflake Server connection definition in SMC, the Snowflake Authentication domain is used:

[Screenshot: Snowflake Server connection definition in SMC using the Snowflake authentication domain]

You can then create Data Library definitions in SMC that use the Snowflake server and connection definitions. 

RGarrido
Fluorite | Level 6

@SASKiwi - thank you for your response.  We use authdomain to hide username and password when creating SAS to Snowflake connection but with an RSA key, it won't work because to connect to snowflake using an RSA key uses the conopts command like so:  
LIBNAME TEST SNOW
SERVER="snowflakecomputing.com"
DATABASE='TESTONE'
WAREHOUSE='DEMO_ONE'
SCHEMA='TITLE'
ROLE='USER_ROLE'
USER='SASUSER_01'
CONOPTS="AUTHENTICATOR=SNOWFLAKE_JWT;
PRIV_KEY_FILE=/LOCATION/OF/KEY/rsa_key.p8;
PRIV_KEY_FILE_PWD=XXXXXXXXXXX;";

Adding an authdomain to hide the username and priv_key_file like so:
LIBNAME TEST SNOW
SERVER="snowflakecomputing.com"
DATABASE='TESTONE'
WAREHOUSE='DEMO_ONE'
SCHEMA='TITLE'
ROLE='USER_ROLE'
CONOPTS="AUTHENTICATOR=SNOWFLAKE_JWT;
PRIV_KEY_FILE=/LOCATION/OF/KEY/rsa_key.p8;"
AUTHDOMAIN=USER_AUTHDOMAIN;
The password inside the AUTHDOMAIN does not get passed to the authenticator (snowflake_jwt).  

JCabralSAS
SAS Employee

Hi @RGarrido - if I'm understanding your situation properly, there are a few options that might work here. I have yet to test these myself to confirm, but my initial ideas are as follows:

  1. Use PROC PWENCODE to hide the values of PRIV_KEY_FILE_PWD. I do have a concern about whether the encoded password would propagate within the CONOPTS, although my read of documentation suggests it should.
  2. Use macro variables to store the value of the PRIV_KEY_FILE_PWD (and have that executed in the autoexec file). This can be combined with the PROC PWENCODE to further obfuscate the value. However, if the CONOPTS do not accept macro or encoded variables, you might have to consider using POSIX controls:
  3. If each user has their own PRIV_KEY_FILE, then you can (if permissible by your organization) use unencrypted key files stored in private directories that are limited in permissions to only the user to whom the key belongs. This can also be done with encrypted key files, but the PRIV_KEY_FILE_PWD would still be visible without the obfuscation of (1) or (2) as listed above.

Let me know if you have any success with these options and if they meet your system's requirements. If not, feel free to schedule a call with me and I will update this thread with any conclusions we reach.

RGarrido
Fluorite | Level 6

@JCabralSAS - For the first two you have suggested, the pwencode will not work because the encoded password is framed inside the conopts command (java based).  Neither will a macro variable when called inside the conopts command.   So:
PRIV_KEY_FILE_PWD={SAS002}706634535C9F3B793CCCB39C4F553FA2;  <- will not be read by conopts nor
PRIV_KEY_FILE_PWD=&password.; 

I tried using ODBC engine and while it works, you need to chmod the permissions for the odbc.ini path for users to read the file which means that they can get to the password using a terminal and running a command such as: cat /etc/odbc.ini  and examine the odbc.ini file where the password is stored.

 

For the third option you suggested users having their own private key files, this will be difficult to control from a security stand point. Normally,  a username and password (snowflake account) is embedded in an authdomain and we assign AD groups to these authdomains. Similarly, if an RSA key is used, the key will be shared across an AD group else if each user has their own unique private key, these need to be mapped to the snowflake service account thru a jenkins pipeline job.  In the end, this means that one snowflake service account gets mapped to multiple service keys instead of just one.


JCabralSAS
SAS Employee

Hi @RGarrido - thanks for following up. Regarding the third option:

The current setup of Snowflake's security regarding RSA Key Pairs makes the assumption/expectation that every user has a separate and manually assigned Private Key Pair. While unfortunately tedious and a hassle to manage for large user bases, it is the only officially supported mechanism bridging SAS9 and Snowflake without triggering MFA push notifications.

 

One workaround might be to create an ODBC.ini file that contains DSN's for all service accounts and nothing else - essentially a long list .ini file containing the values for each service account's PRIV_KEY_FILE & PRIV_KEY_FILE_PWD. These, in turn, are linked to the respective Snowflake users that they operate as. Then, you can chmod that ODBC.ini file to only be readable by the users/groups/identities in the server that are assumed by the service users. This would prevent any of your human users from leveraging a terminal to print out the ODBC.ini file to examine its contents.

 

Now, this does mean you would need a separate ODBC.ini file that's readable for your human users. Since the ODBCINI environment variable is defined before the autoexec, the configuration would have to be made before launching SAS, which might pose a major problem that's beyond my knowledge. I'll keep thinking on ways that could allow linking multiple different ODBC.ini files, but I haven't seen that done before.

 

If you are able to leverage two separate ODBC.ini files at different times, then, all your users would have to do is submit the CONOPTS with the DSN name and the key file & password associated with their Snowflake human user. This assumes that every human user in SAS has a human user in Snowflake. If that's not the case, you will likely have human users with shared access to a Snowflake account & therefore shared access to its associated private key file & password. In the absence of the second file workaround, we can pass the workaround to the user (e.g. have them define everything except the driver about their Snowflake connection in their CONOPTS), though I've been desperately trying to avoid doing that.

 

If all else fails, you can try leveraging PAT's (Programmatic Access Tokens) for authentication, basically treating the PAT as a password in a LIBNAME statement, including in Auth Domains. I have seen some success here and the execution is pretty simple, BUT with the major caveat that it is NOT officially supported by SAS, so you may have to experiment with it.

 

I apologize for the inconvenience of these recommendations, but I do hope that this points you in the right direction.

RGarrido
Fluorite | Level 6

Thanks for this @JCabralSAS .  I have gone through the path you outlined and couldn't find a solution that would actually work.  The closest one would be to use the ODBC engine and mask the password and maybe the path to the private key file using some environmental variables.  As you mentioned there are a lot of challenges and there are other options like PAT.  user FreshStarter is also keen to find out what works best for this so I guess there are others out there facing these challenges.  

freshstarter
Quartz | Level 8
Hello @RGarrido

Just letting you know I have tried the PAT approach before and onboarded the PAT token into a SAS authdomain, but the problem is the PAT is 222 characters long and it seems the Authdomain can accommodate only 128 characters. I’m not sure whether we can overcome this issue on SAS.
Soundappan
Fluorite | Level 6

Hi @freshstarter 

Were you able to fix the issue? Can you share the Auth Domain setup config and where the PAT token was saved?

 

Will this odbc.ini config work with Auth domain setup?

ODBC.ini

[ODBC Data Sources]
Snowflake_pat = Snowflake ODBC Driver 64-bit

[Snowflake_pat]
Driver=/usr/lib64/snowflake/odbc/lib/libSnowflake.so
server=<xxx.snowflakecomputing.com>
role=<snowflake_consumer>
warehouse=<snowflake_wh>
database=<db_name>
authenticator=programmatic_access_token
 

SAS code

libname PATSNOW snow
  dsn = "Snowflake_pat"
  authdomain = "snowflake_pat_auth"
  schema = <schema>
;

proc sql;
  connect using PATSNOW as snow;
  execute(
   create or replace table &db_primary..&schema_temp..sample_table as
     select *
     from &db_primary..&schema_temp..tmp_table
     limit 100
    ;
  ) by snow;
 disconnect from snow;
quit;

libname PATSNOW clear;

 

Incase If we do not use auth domain, can we directly use the PAT token in the odbc.ini?

 

freshstarter
Quartz | Level 8

Hi @Soundappan 

 

I have switched the authentication from PAT to a key pair, and it works for me. As I mentioned earlier, the PAT token cannot be added to the AuthDomain in the SAS Management Console due to a length constraint. The PAT token is 222 characters long, while the AuthDomain allows a maximum of 128 characters.

 

However, I recall that when we hardcoded the PAT in the LIBNAME statement using the PWD parameter, it worked.

 

Similarly, if we hardcode the same PAT in the odbc.ini file under the Password parameter, it will work  as well.

SASKiwi
PROC Star

@RGarrido - Completely understand that using a CONOPTS mechanism isn't going to work with AUTHDOMAIN. Needs to be via a SAS-supported option.

freshstarter
Quartz | Level 8

Thank you, @JCabralSAS, for your post. It was very useful.

 

We are planning to switch from password-based authentication to key-pair authentication for SAS to Snowflake. Our current requirement is:

     1. Overnight batch/service account: key-pair authentication

     2. User authentication: normal password authentication with DUO MFA

We are using SAS 9.4M7 on a RHEL server.

 

I have created two DSNs in the odbc.ini file:

    1. Password-based DSN: without the authenticator parameter

    2. Key-pair DSN: with authenticator=snowflake_jwt and the PRIV_KEY_FILE and PRIV_KEY_FILE_PWD parameters

 

Using the key-pair DSN, I can successfully run the LIBNAME statement from SAS Enterprise Guide and connect to Snowflake. I am also able to define a pre-assigned SAS library for key-pair authentication and use it successfully.

 

Challenge

Regarding the Private_key_file path and passphrase password on odbc.ini file

 

Questions:

1. I will not be able to pass PRIV_KEY_FILE and PRIV_KEY_FILE_PWD directly in the odbc.ini file due to security concerns. I noticed your Windows example included these parameters—can this approach be applied to Linux ODBC drivers?

2. If it is possible, would these parameters need to be DSN-specific? (We have one DSN for password authentication and another for key-pair authentication.)

 

I also attempted to set environment variables as follows, but it did not work:

export PRIV_KEY_FILE
export PRIV_KEY_FILE_PWD


Could you please advise on the recommended approach to overcome these issues?

 

Thank you for your time and guidance.

JCabralSAS
SAS Employee

Hi @freshstarter, thank you for reading. I'm glad it helped!

Security of the PRIV_KEY_FILE & PRIV_KEY_FILE_PWD content has been a tricky thorn in my side, especially since I'm more of a Snowflake user-admin than a security guru. BUT let's see what we can do.

  1. The Windows approach CAN be parallelized to Linux ODBC. What I would recommend, given your service users are the only ones using key-pair authentication, is to define the DSN with everything except the Private Key Pair path and password. However you store those files & passwords and secure them is up to you, with the only requirement that each Private Key FILE is readable by the identity assumed by the service account in the Linux system. So long as that holds, as your workforce writes jobs for the service accounts, they can just reference the file location and password directly in the LIBNAME statement via CONOPTS. The human element is that you have to make sure that the writers of these batch jobs are provided with the locations & passwords (whether you keep that information in a key-vault, on a protected doc somewhere, whatever your security practices permit) for ONLY the service accounts that they are permitted to use.
  2. I'm not sure which parameters you're referring to here - the PRIV_KEY_FILE & PRIV_KEY_FILE_PWD CAN be used in multiple DSN's with the same values - as long as the Snowflake identity that SAS assumes is linked to said Private Key. As for the other parameters like warehouse, database, schema, etc. - you can mix and match them across multiple DSN's as much as you like. Users can even overwrite them in their LIBNAME statements, which is convenient because it allows you to write the ODBC.ini with standard or placeholder values for database & schema (i.e. some public <db>.<schema> with a public role that really has no distinct permissions), and then the users can tailor their connection to the workloads they're submitting.

As for the export issue, as another user mentioned, the export command for setting Linux environment variables doesn't automatically propagate into the SAS system. There's a quick how-to on SAS's Customer Support Center plus some documentation here that dives well beyond my knowledge about setting and ingesting environment variables into SAS 9. I hope this helps!

RGarrido
Fluorite | Level 6

Here is a step forward in coding the LIBNAME for the SAS to Snowflake connector using an RSA key + passphrase: replace USER with AUTHDOMAIN.
LIBNAME TEST SNOW
SERVER="snowflakecomputing.com"
DATABASE='TESTONE'
WAREHOUSE='DEMO_ONE'
SCHEMA='TITLE'
ROLE='USER_ROLE'
USER='SASUSER_01'
CONOPTS="AUTHENTICATOR=SNOWFLAKE_JWT;
PRIV_KEY_FILE=/LOCATION/OF/KEY/rsa_key.p8;
PRIV_KEY_FILE_PWD=XXXXXXXXXXX;";

 

With the above code, there is no filtering on the part of the users that can run the code.  Those with SAS access can run the code and see the tables. What I have tested is to add the AUTHDOMAIN into the code and remove the USER line.  This will do three things:

1.  Filter the right users to have access to the connection.  Only those groups who are given access with the AUTHDOMAIN are able to run the code.
2.  In SAS EG, right-clicking the library used to give full information about the libname. Now, those details are hidden, including the USER ID.

3. The USERID is hidden from the user. This is an important component for logging in to Snowflake: without a userid, a user cannot log in even with an RSA key and a passphrase.

The code that I ran is like the one below.  NO USER line and uses AUTHDOMAIN:

LIBNAME TEST SNOW
SERVER="snowflakecomputing.com"
DATABASE='TESTONE'
WAREHOUSE='DEMO_ONE'
SCHEMA='TITLE'
ROLE='USER_ROLE'
AUTHDOMAIN='TEST_RSA_CONNECT'
CONOPTS="AUTHENTICATOR=SNOWFLAKE_JWT;
PRIV_KEY_FILE=/LOCATION/OF/KEY/rsa_key.p8;
PRIV_KEY_FILE_PWD=XXXXXXXXXXX;";

