Recently I ran into some issues configuring direct Kerberos authentication with the SAS/CONNECT Spawner on SAS Viya. As such, I wanted to use this post to explain some of the different logging you can enable with SAS/CONNECT to help understand what is happening with Kerberos authentication.
Before we examine the logging configuration and the information it can produce, we need a general understanding of what should be happening. When the SAS/CONNECT Spawner is configured for direct Kerberos authentication, at a high level the following happens:
1. The external client connects to the SAS/CONNECT Spawner and authenticates with Kerberos (GSSAPI/SSPI).
2. The SAS Kerberos Proxy sidecar in the SAS/CONNECT Spawner pod is used to obtain an OAuth token from SAS Logon Manager.
3. The SAS/CONNECT Spawner launches the SAS/CONNECT Server pod.
4. The SAS Kerberos Proxy sidecar in the SAS/CONNECT Server pod establishes the end user's Kerberos credentials for outbound connections from the SAS/CONNECT Server.
So, we can see that depending on when our issue occurs, we might need to look at logging for the SAS/CONNECT Spawner or for the SAS/CONNECT Server. If issues occur in launching the SAS/CONNECT Server, we need to look at the SAS/CONNECT Spawner logging. Whereas, if the issues are in the generation of the Kerberos credentials for outbound connections from the SAS/CONNECT Server, we need to look at the logging for the server.
Most often issues will occur in the initial phase, when the external client authenticates to the SAS/CONNECT Spawner. As such, we’ll first look at the best way to enable additional logging with the SAS/CONNECT Spawner.
The SAS/CONNECT Spawner does ship with both a standard and a trace-level logging configuration file. However, the trace-level configuration file shipped in the pod generates a very large amount of information, which could end up masking the messages we are concerned about.
Alternatively, we can configure the SAS/CONNECT Spawner to load its logging configuration dynamically from SAS Configuration Server content. We can manage that configuration with either the SAS Viya CLI or SAS Environment Manager. In SAS Environment Manager the configuration is under sas.connect.spawner, where we have the logconfig and startup_commands sets of configuration. The SAS documentation covers updating this configuration to change the logging settings.
To summarize what is in the SAS documentation: we add "-logconfigloc /config/customlogconfig-spawner.xml" to the startup_commands, appending it to the USERMODS line so that it looks like the following:
USERMODS="-NOSCRIPT -logconfigloc /config/customlogconfig-spawner.xml"
Then we copy the existing logconfig.contents under sas.connect.spawner, so that we can restore it after our debugging, and replace logconfig.contents with the following:
<?xml version="1.0" encoding="UTF-8"?>
<logging:configuration xmlns:logging="http://www.sas.com/xml/logging/1.0/">
    <!-- Console appender writing out JSON for Kubernetes-based containers -->
    <appender name="Console" class="ConsoleAppender">
        <param name="ImmediateFlush" value="true"/>
        <param name="SkipEmpty" value="true"/>
        <layout type="Json">
            <param name="Individual" value="true"/>
            <param name="version#" value="%S{eventModel.payload.version}"/>
            <param name="timeStamp" value="%d{LEMZone}"/>
            <param name="level" value="%S{eventModel.payload.level}"/>
            <param name="source" value="%S{OSENV.SAS_PAYLOAD_SOURCE|sas}"/>
            <param name="messageKey" value="%K"/>
            <param name="messageParameters{}" value="%S{eventModel.payload.parameters}"/>
            <param name="message" value="%m"/>
            <param name="properties.thread" value="%t"/>
            <param name="properties.caller" value="%F:%L"/>
            <param name="properties.logger" value="%c"/>
            <param name="properties.pod" value="%S{hostname}"/>
        </layout>
    </appender>
    <!-- Application message logger -->
    <logger name="App">
        <level value="Info"/>
    </logger>
    <logger name="Audit.Authentication">
        <level value="Trace"/>
    </logger>
    <logger name="App.Connect.Spawner.Client">
        <level value="Trace"/>
    </logger>
    <root>
        <level value="Info"/>
        <appender-ref ref="Console"/>
    </root>
</logging:configuration>
The key parts of this logconfig are the two loggers set to "Trace": "Audit.Authentication" and "App.Connect.Spawner.Client". These will provide details of the authentication to the SAS/CONNECT Spawner and of the authentication to SAS Logon Manager.
The SAS/CONNECT Spawner will need to be restarted to pick up the changes in the logging configuration (for example, with kubectl -n ${NS} rollout restart deployment sas-connect-spawner, assuming the spawner runs as the sas-connect-spawner deployment in your environment).
For example, when connecting successfully with Kerberos to the SAS/CONNECT Spawner some messages we will see are:
TRACE 2025-11-20T16:16:07.117000+00:00 [sas]- authenticateClient: client wants SSPI logon
TRACE 2025-11-20T16:16:07.117000+00:00 [sas]- authenticateUserSSPIv2: Enter
TRACE 2025-11-20T16:16:07.118000+00:00 [sas]- authenticateUserSSPIv2Thread: Enter
DEBUG 2025-11-20T16:16:07.128000+00:00 [sas]- Using SAS_SERVICE_PRINCIPAL: SAS/######-######-rg.gelenable.sas.com
DEBUG 2025-11-20T16:16:07.349000+00:00 [sas]- Acquired server principal name: SAS/######-######-rg.gelenable.sas.com@GELENABLE.SAS.COM
DEBUG 2025-11-20T16:16:07.351000+00:00 [sas]- Client user name is sastest1@GELENABLE.SAS.COM
DEBUG 2025-11-20T16:16:07.351000+00:00 [sas]- Delegated client principal name: sastest1@GELENABLE.SAS.COM
DEBUG 2025-11-20T16:16:07.351000+00:00 [sas]- Deleg cred expires Fri Nov 21 01:47:51 2025
INFO 2025-11-20T16:16:07.352000+00:00 [sas]- Client connection 0x7fab9a8b3780's user "sastest1" from 172.173.76.174 (172.173.76.174) successfully authenticated.
DEBUG 2025-11-20T16:16:07.352000+00:00 [sas]- authenticateUserSSPIv2Thread: Calling TKMTRBLogon(sastest1) provider: SSPI
TRACE 2025-11-20T16:16:07.406000+00:00 [sas]- SecurityMakeSPN [ENTER] : svcClass: >HTTPsas-logon-app< outLen: 39
DEBUG 2025-11-20T16:16:07.407000+00:00 [sas]- Acquired credentials for: sastest1@GELENABLE.SAS.COM
DEBUG 2025-11-20T16:16:08.081000+00:00 [sas]- authenticateUserSSPIv2Thread: TKMTRBLogon(sastest1) successful. tokenL:0
TRACE 2025-11-20T16:16:09.487000+00:00 [sas]- serverHandshake: waiting for server to start and connect
Alternatively, when connecting successfully using a username and password to the SAS/CONNECT Spawner some messages we will see are:
DEBUG 2025-11-20T17:52:34.415000+00:00 [sas]- authenticateClient: performing authentication on OAuth token
TRACE 2025-11-20T17:52:34.415000+00:00 [sas]- authenticateUserOAuth: Enter, logonOnly=0
TRACE 2025-11-20T17:52:34.415000+00:00 [sas]- authenticateUserOAuth: using Kerberos Credential to get OAuth token
TRACE 2025-11-20T17:52:34.415000+00:00 [sas]- getKerberosCredential: Enter
TRACE 2025-11-20T17:52:34.415000+00:00 [sas]- getKerberosCredential: krb5-proxy URL: >http://127.0.0.1:55901/auth/HTTPsas-logon-app< outLen: 0
DEBUG 2025-11-20T17:52:34.779000+00:00 [sas]- Acquired credentials for: sastest2@GELENABLE.SAS.COM
TRACE 2025-11-20T17:52:35.664000+00:00 [sas]- authenticateUserOAuth: Exit, status=0 (0x0)
TRACE 2025-11-20T17:52:37.455000+00:00 [sas]- serverHandshake: waiting for server to start and connect
With both sets of sample logging there are other messages in the log; we have just selected the key information.
So, we can clearly see that having those two loggers set to "Trace" provides sufficient information to troubleshoot authentication issues with the SAS/CONNECT Spawner itself.
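With trace enabled, the spawner's own container log can be filtered down to just these markers. A minimal sketch follows; the embedded sample (abbreviated from the messages above) stands in for live output, and the container name sas-connect-spawner in the commented kubectl form is an assumption to verify in your deployment:

```shell
# In a live environment the input would come from something like:
#   kubectl -n "${NS}" logs -l app=sas-connect-spawner -c sas-connect-spawner
# Here an embedded sample, abbreviated from the messages above, stands in for it.
sample='TRACE [sas]- authenticateClient: client wants SSPI logon
DEBUG [sas]- Client user name is sastest1@GELENABLE.SAS.COM
TRACE [sas]- serverHandshake: waiting for server to start and connect
INFO [sas]- Client connection user "sastest1" successfully authenticated.'

# Keep only the lines that mark the Kerberos authentication outcome.
printf '%s\n' "$sample" | grep -E 'SSPI|Delegated|authenticated'
```

The same filter applied to a failing connection quickly shows how far the SSPI exchange progressed before it stopped.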
If we move on from the SAS/CONNECT Spawner itself, there might be occasions where the issue is with the SAS Kerberos Proxy sidecar. The logging information from the SAS/CONNECT Spawner might point in this direction, or there might be issues with the Kerberos credentials cache on the launched SAS/CONNECT Server.
Enabling debug logging for the SAS Kerberos Proxy sidecar is a little more complex. The logging level is defined with an environment variable, KRB5PROXY_LOG_TYPE, which is set in the site-config/kerberos/sas-servers/configmaps.yaml file. This in turn is loaded into several config maps inside the Kubernetes namespace.
We could use the following command to fetch the name of the specific config map used by the SAS/CONNECT Spawner:
kubectl -n $NS get pod -l app=sas-connect-spawner -o json|jq -r '.items[].spec.containers[] | select(.name| contains("sas-krb5-proxy")).envFrom[].configMapRef | select(.name| contains("sidecar-config"))[]'
Which would return something like:
sas-servers-kerberos-sidecar-config-dm5ckmd5h4
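If jq is not available, a rougher equivalent is to list all config map names and filter with grep. A sketch, with an embedded sample (the neighbouring names are made up) standing in for the live kubectl output:

```shell
# Live form (assumes only the sidecar config maps match "sidecar-config"):
#   kubectl -n "${NS}" get configmaps -o name | grep sidecar-config
# Embedded sample, with made-up neighbours, standing in for the kubectl output:
maps='configmap/sas-connect-spawner
configmap/sas-servers-kerberos-sidecar-config-dm5ckmd5h4
configmap/sas-shared-config'
printf '%s\n' "$maps" | grep sidecar-config
```

This is less precise than the jq filter: it returns every sidecar config map in the namespace, not specifically the one referenced by the SAS/CONNECT Spawner pod.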
We could then edit just this single config map to update the logging level for the SAS Kerberos Proxy sidecar. Alternatively, we could edit the site-config/kerberos/sas-servers/configmaps.yaml file, then rebuild and reapply our site.yaml; this ensures the logging level is changed in all the config maps used by the SAS Kerberos Proxy sidecar across all the pods it is included in.
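For the edit-the-file route, the change itself is just one data key in the sidecar ConfigMap. A sketch of the relevant part of configmaps.yaml; the ConfigMap base name, the surrounding keys, and the accepted level values are assumptions to verify against your shipped file:

```yaml
# Fragment only -- your site-config/kerberos/sas-servers/configmaps.yaml will
# contain additional data keys; only KRB5PROXY_LOG_TYPE is of interest here.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sas-servers-kerberos-sidecar-config   # assumed base name
data:
  KRB5PROXY_LOG_TYPE: "TRACE"   # assumed value; check your file for accepted levels
```

After editing, rebuild site.yaml and apply it as usual so the change reaches every pod that mounts the sidecar configuration.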
In the case of using a username and password to authenticate, we have seen from the selected messages above that the SAS Kerberos Proxy sidecar in the SAS/CONNECT Spawner pod is responsible for obtaining the Kerberos credentials. Therefore, using the following command, we can review the SAS Kerberos Proxy sidecar logging for the SAS/CONNECT Spawner pod:
kubectl -n ${NS} logs -l app=sas-connect-spawner -c sas-krb5-proxy --tail -1|gel_log
This would output quite a large amount of information if trace level logging were enabled. Some example messages, when a successful username and password connection is made, are:
Debug: 2025/11/27 13:59:03 GSSAPI Debug: In AcquireCredWithKinit()
Trace: 2025/11/27 13:59:03 krbAuth: Authenticator.acquireCredWithKinit succeeded
Debug: 2025/11/27 13:59:03 krbAuth: Authenticator.found spn from credential HTTP/######-######-rg.gelenable.sas.com@GELENABLE.SAS.COM
Trace: 2025/11/27 13:59:03 krbAuth: Authenticator.l.spn(HTTP/######-######-rg.gelenable.sas.com@GELENABLE.SAS.COM) and l.realm(GELENABLE.SAS.COM) set
Trace: 2025/11/27 13:59:03 krbAuth: Authenticator.GetServerHandle Exit
Trace: 2025/11/27 13:59:03 krbAuth: NewAuthenticator Exit
Trace: 2025/11/27 13:59:03 oauth: Oauth.NewAuthenticator: Enter
Trace: 2025/11/27 13:59:03 oauth: Oauth.NewAuthenticator: Exit
Info: 2025/11/27 13:59:03 main: Starting HTTP server on 127.0.0.1:55901
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.Verify handler: checking for Authorization header:
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.Verify handler: found basic
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.getTokenFromBasic: enter
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.getTokenFromBasic: using UPN: sastest2
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.getTokenFromBasic: converted basic creds to Memory CredCache
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.getTokenFromBasic: exit
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.Verify handler: checking if service cred should be re-acquired
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.NewSessionData: Enter
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.NewSessionData: Exit
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.Verify handler: basic auth returned cred cache
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.Verify handler: add cache to session (may skip rest of authentication)
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.Verify handler: setting sessionCookie
Trace: 2025/11/27 14:21:09 krbAuth: Authenticator.Verify handler: session says auth complete, add to context
Debug: 2025/11/27 14:21:09 WriteCcacheHandler(1): found credential of type *krb.UserData
Debug: 2025/11/27 14:21:09 WriteCcacheHandler(1): Temporary output file is: /tmp/krb5_cc_sastest2_F4GrKQ
Info: 2025/11/27 14:21:09 GSSAPI Trace: Cred.StoreInto: enter
Info: 2025/11/27 14:21:09 GSSAPI Trace: Cred.StoreInto: exit
This snippet shows the logging from the startup of the SAS/CONNECT Spawner pod. The events occurring at approximately 13:59 are those startup messages. At startup the SAS Kerberos Proxy sidecar initializes its own Kerberos credentials using the Kerberos keytab it is provided with.
The messages occurring at 14:21 are the username and password authentication. We can see the message "converted basic creds to Memory CredCache", which tells us the SAS Kerberos Proxy sidecar has obtained Kerberos credentials using the username and password.
Between the startup of the SAS/CONNECT Spawner and the authentication with username and password, a session was launched with direct Kerberos authentication. We can see from the SAS Kerberos Proxy sidecar logging that the sidecar did nothing for that connection.
For both the username-and-password and the direct Kerberos connections, the SAS Kerberos Proxy sidecar that establishes the end user's credentials is the one inside the launched SAS/CONNECT Server pod. We can use a command like the following to look at the logging for this SAS Kerberos Proxy sidecar:
kubectl -n ${NS} logs -l launcher.sas.com/username=sastest1 -c sas-krb5-proxy --tail -1|gel_log
Notice that the label we are searching for contains the username of the user who launched the SAS/CONNECT Server. If that user is also running a SAS Viya compute session at the same time, this selection will not work, as there will be two pods with that label.
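One way around the two-pods problem is to list the matching pods by name and keep only the SAS/CONNECT Server one. The sas-connect-server name prefix is an assumption to verify against your namespace, and the embedded sample (with illustrative pod names) stands in for live kubectl output:

```shell
# Live form:
#   kubectl -n "${NS}" get pods -l launcher.sas.com/username=sastest1 -o name
# Embedded sample standing in for that output when the user also has a
# compute session running (pod names are illustrative):
pods='pod/sas-compute-server-aaaa
pod/sas-connect-server-bbbb'
printf '%s\n' "$pods" | grep sas-connect-server
```

The surviving pod name can then be passed to kubectl logs with -c sas-krb5-proxy to review just that pod's sidecar logging.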
Again, we can review a selection of the messages produced by the SAS Kerberos Proxy sidecar at trace level.
Debug: 2025/11/27 14:15:39 GSSAPI Debug: In AcquireCredWithKinit()
Trace: 2025/11/27 14:15:40 krbAuth: Authenticator.acquireCredWithKinit succeeded
Debug: 2025/11/27 14:15:40 krbAuth: Authenticator.found spn from credential HTTP/######-######-rg.gelenable.sas.com@GELENABLE.SAS.COM
Trace: 2025/11/27 14:15:40 krbAuth: Authenticator.l.spn(HTTP/######-######-rg.gelenable.sas.com@GELENABLE.SAS.COM) and l.realm(GELENABLE.SAS.COM) set
Trace: 2025/11/27 14:15:40 krbAuth: Authenticator.GetServerHandle Exit
Trace: 2025/11/27 14:15:40 krbAuth: NewAuthenticator Exit
Trace: 2025/11/27 14:15:40 oauth: Oauth.NewAuthenticator: Enter
Trace: 2025/11/27 14:15:40 oauth: Oauth.NewAuthenticator: Exit
Info: 2025/11/27 14:15:40 main: Starting HTTP server on 127.0.0.1:55901
Trace: 2025/11/27 14:15:40 oauth: Oauth.Verify: Enter
Trace: 2025/11/27 14:15:40 oauth: Oauth.VerifyRequest: Enter
Trace: 2025/11/27 14:15:40 oauth: Oauth.finderFunc: Enter
Trace: 2025/11/27 14:15:40 oauth: Oauth.finderFunc: found Bearer token
Trace: 2025/11/27 14:15:40 oauth: Oauth.finderFunc: Exit
Trace: 2025/11/27 14:15:40 oauth: Oauth.VerifyRequest: Exit
Trace: 2025/11/27 14:15:40 oauth: Oauth.NewContext: Enter
Trace: 2025/11/27 14:15:40 oauth: Oauth.NewContext: creating context with UserData
Trace: 2025/11/27 14:15:40 oauth: Oauth.NewContext: Exit
Debug: 2025/11/27 14:15:40 WriteCcacheHandler(1): found credential of type *oauth.UserData
Debug: 2025/11/27 14:15:40 WriteCcacheHandler(1): parameters from query:
Trace: 2025/11/27 14:15:40 WriteCcacheHandler(1): trying constrained delegation
Trace: 2025/11/27 14:15:40 krbAuth: Authenticator.ProxyCredHandler: Enter
Trace: 2025/11/27 14:15:40 krbAuth: Authenticator.impersonateUser: Enter
Trace: 2025/11/27 14:15:40 oauth: Oauth.UserData.oauth.UserData.UPN: Enter
Trace: 2025/11/27 14:15:40 oauth: Oauth.UserData.oauth.UserData.UPN: origin='kerberos'
Trace: 2025/11/27 14:15:40 oauth: Oauth.UserData.oauth.UserData.UPN: Exit
Debug: 2025/11/27 14:15:40 krbAuth: Authenticator.impersonateUser: becoming upn=sastest1@GELENABLE.SAS.COM
Trace: 2025/11/27 14:15:40 krbAuth: Authenticator.impersonateUser: non-kerberos credential found
Info: 2025/11/27 14:15:40 GSSAPI Trace: *Lib.AcquireCredImpersonateName: enter
Info: 2025/11/27 14:15:40 GSSAPI Trace: *Lib.AcquireCredImpersonateName: exit
Debug: 2025/11/27 14:15:40 krbAuth: Authenticator.impersonateUser: gssapi.AcquireCredImpersonateName() success, now impersonating: sastest1@GELENABLE.SAS.COM
Trace: 2025/11/27 14:15:40 krbAuth: Authenticator.impersonateUser: Exit
Debug: 2025/11/27 14:15:40 WriteCcacheHandler(2): Temporary output file is: /tmp/krb5_cc_sastest1_b5q7lA
Info: 2025/11/27 14:15:40 GSSAPI Trace: Cred.StoreInto: enter
Info: 2025/11/27 14:15:40 GSSAPI Trace: Cred.StoreInto: exit
This logging shows the SAS Kerberos Proxy sidecar in the SAS/CONNECT Server pod establishing the Kerberos credentials for the end user. In our case, since we have Kerberos constrained delegation enabled, it is using protocol transition to impersonate the end user. If the HTTP principal were not correctly configured, this would fail. If we inspect the Kerberos ticket cache in the launched SAS/CONNECT Server pod, we will see something like:
Default principal: sastest1@GELENABLE.SAS.COM
Valid starting Expires Service principal
11/27/2025 09:45:14 11/27/2025 19:45:14 HTTP/######-######-rg.gelenable.sas.com@GELENABLE.SAS.COM
renew until 12/04/2025 09:45:13
11/27/2025 09:45:14 11/27/2025 19:45:14 krbtgt/GELENABLE.SAS.COM@GELENABLE.SAS.COM
for client HTTP/######-######-rg.gelenable.sas.com@GELENABLE.SAS.COM, renew until 12/04/2025 09:45:13
When authenticated successfully, this shows that the HTTP principal is used to obtain the constrained delegation credentials for the end user.
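That check lends itself to scripting: the constrained delegation succeeded if the TGT entry in the cache is held "for client" the HTTP principal. A sketch against the captured klist output above (embedded as a sample; in a live pod you would run klist directly):

```shell
# Captured klist output from the SAS/CONNECT Server pod, embedded as a sample.
cache='Default principal: sastest1@GELENABLE.SAS.COM
11/27/2025 09:45:14 11/27/2025 19:45:14 krbtgt/GELENABLE.SAS.COM@GELENABLE.SAS.COM
    for client HTTP/######-######-rg.gelenable.sas.com@GELENABLE.SAS.COM, renew until 12/04/2025 09:45:13'
# Count the TGT entries obtained via the HTTP principal; expect at least one.
printf '%s\n' "$cache" | grep -c 'for client HTTP/'
```

If the count is zero, the cache was not populated through constrained delegation, which points back at the HTTP principal configuration.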
In this post we’ve first reminded ourselves of the processing for direct Kerberos authentication to the SAS/CONNECT Spawner in SAS Viya. We have also taken notice of the different roles played by the SAS Kerberos Proxy sidecar running in different pods. Armed with this knowledge we have shown how to enable specific additional logging to better troubleshoot issues with authentication to the SAS/CONNECT Spawner.
If you want to explore this topic further, you can do so in the SAS® Viya® Advanced Authentication – Kerberos workshop on learn.sas.com.
Find more articles from SAS Global Enablement and Learning here.