This is the fourth article in the series looking at Kerberos delegation with the 2020.1.4 release of SAS Viya. In previous posts we’ve looked at an overview of Kerberos delegation, the process flow for unconstrained delegation, and the process flow for constrained delegation. In this post we will examine the configuration of Kerberos delegation. This will cover both constrained and unconstrained delegation. We will leave SAS/CONNECT and SAS/ACCESS to Hadoop specifics for later posts.
First and most importantly, all of the configuration is completed using the kustomization.yaml; there is no configuration completed using SAS Environment Manager or the SAS Viya CLI. In fact, there is nothing you will be able to see within SAS Environment Manager or through the SAS Viya CLI that shows the environment is configured for Kerberos delegation. So, as part of this article we will also look at using kubectl to examine the running configuration.
As with any Kerberos configuration, if the prerequisites are not completed the configuration will never work. In this section we will focus on testing the prerequisites to ensure they have been completed correctly. We will focus on the most common use case of Active Directory as the KDC. We will also assume that testing the prerequisites occurs on Linux, since this will likely be done on the host from which you run the kubectl commands. This host does not need to be integrated with the KDC, but we will require some tools to test the prerequisites. Specifically, the klist, kinit, and kvno tools that we will use in this section are part of the Kerberos client package on Linux. On RHEL, CentOS, or Oracle Linux these can be installed with:
sudo yum install krb5-workstation
or on SLES with:
sudo zypper install krb5-client
In the earlier overview article we addressed the requirements for Kerberos delegation. For the testing in this section we require:
- A Kerberos keytab for the HTTP principal, registered in Active Directory with the User Principal Name (UPN) equal to the Service Principal Name (SPN)
- A Kerberos configuration file (krb5.conf) pointing at the correct KDCs for the realm
- The HTTP principal trusted for delegation in Active Directory, either unconstrained or constrained to the target data source SPNs
The contents of the Kerberos keytab file can be reviewed using the klist tool, as shown here:
klist -ket http.keytab
You should see something like the following:
Keytab name: FILE:/home/user/http.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
2 12/10/2020 05:15:54 HTTP/IngressHostname@CUSTOMER.COM (arcfour-hmac)
2 12/10/2020 05:15:54 HTTP/IngressHostname@CUSTOMER.COM (aes128-cts-hmac-sha1-96)
2 12/10/2020 05:15:54 HTTP/IngressHostname@CUSTOMER.COM (aes256-cts-hmac-sha1-96)
This shows that our Kerberos keytab contains three long-term keys for the HTTP principal, each with a different encryption type. The Service Principal Name (SPN) of the HTTP principal is based on the DNS A record for the Ingress Controller.
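Since the SPN is built from the hostname, it is worth confirming that the Ingress hostname resolves as an A record rather than a CNAME, because some clients construct the SPN from the canonical name and would then request a ticket for an unexpected SPN. A quick check, assuming the dig utility (from the bind-utils package) is available and using the placeholder hostname from this article:
dig +short A IngressHostname
dig +short CNAME IngressHostname
The first command should return the address of the Ingress Controller and the second should return nothing.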
Since we require that UPN=SPN we can use the Kerberos keytab to initialize a Kerberos credential for the HTTP principal. This will confirm that the long-term keys match what is in Active Directory. It also allows us to confirm the basics of the Kerberos configuration file. You can use the following commands to validate that the Kerberos keytab can be used to authenticate to Active Directory:
KRB5_CONFIG=krb5.conf \
KRB5_TRACE=/dev/stdout \
kinit -kt http.keytab HTTP/IngressHostname@CUSTOMER.COM; \
kdestroy
You should see something like the following:
[2127] 1607599046.699112: Getting initial credentials for HTTP/IngressHostname@CUSTOMER.COM
[2127] 1607599046.699113: Looked up etypes in keytab: rc4-hmac, aes128-cts, aes256-cts
[2127] 1607599046.699115: Sending unauthenticated request
[2127] 1607599046.699116: Sending request (201 bytes) to CUSTOMER.COM
[2127] 1607599046.699117: Resolving hostname ad1.customer.com
[2127] 1607599046.699118: Sending initial UDP request to dgram 10.96.13.114:88
[2127] 1607599046.699119: Received answer (209 bytes) from dgram 10.96.13.114:88
[2127] 1607599046.699120: Response was not from master KDC
[2127] 1607599046.699121: Received error from KDC: -1765328359/Additional pre-authentication required
[2127] 1607599046.699124: Preauthenticating using KDC method data
[2127] 1607599046.699125: Processing preauth types: PA-PK-AS-REQ (16), PA-PK-AS-REP_OLD (15), PA-ETYPE-INFO2 (19), PA-ENC-TIMESTAMP (2)
[2127] 1607599046.699126: Selected etype info: etype aes256-cts, salt "CUSTOMER.COMHTTPIngressHostname", params ""
[2127] 1607599046.699127: Retrieving HTTP/IngressHostname@CUSTOMER.COM from FILE:http.keytab (vno 0, enctype aes256-cts) with result: 0/Success
[2127] 1607599046.699128: AS key obtained for encrypted timestamp: aes256-cts/43C6
[2127] 1607599046.699130: Encrypted timestamp (for 1607599046.517253): plain 301AA011180F32303230313231303131313732365AA105020307E485, encrypted D9054DEFB273FFB4365BA7380508CD06BBF70BC44AE9B308D2B565F34D7F86D23F6FD54682DBB662433865A7D8B742BB453AD63AE4013144
[2127] 1607599046.699131: Preauth module encrypted_timestamp (2) (real) returned: 0/Success
[2127] 1607599046.699132: Produced preauth for next request: PA-ENC-TIMESTAMP (2)
[2127] 1607599046.699133: Sending request (281 bytes) to CUSTOMER.COM
[2127] 1607599046.699134: Resolving hostname ad1.customer.com
[2127] 1607599046.699135: Sending initial UDP request to dgram 10.96.13.114:88
[2127] 1607599046.699136: Received answer (92 bytes) from dgram 10.96.13.114:88
[2127] 1607599046.699137: Response was not from master KDC
[2127] 1607599046.699138: Received error from KDC: -1765328332/Response too big for UDP, retry with TCP
[2127] 1607599046.699139: Request or response is too big for UDP; retrying with TCP
[2127] 1607599046.699140: Sending request (281 bytes) to CUSTOMER.COM (tcp only)
[2127] 1607599046.699141: Resolving hostname ad1.customer.com
[2127] 1607599046.699142: Initiating TCP connection to stream 10.96.13.114:88
[2127] 1607599046.699143: Sending TCP request to stream 10.96.13.114:88
[2127] 1607599046.699144: Received answer (1722 bytes) from stream 10.96.13.114:88
[2127] 1607599046.699145: Terminating TCP connection to stream 10.96.13.114:88
[2127] 1607599046.699146: Response was not from master KDC
[2127] 1607599046.699147: Processing preauth types: PA-ETYPE-INFO2 (19)
[2127] 1607599046.699148: Selected etype info: etype aes256-cts, salt "CUSTOMER.COMHTTPIngressHostname", params ""
[2127] 1607599046.699149: Produced preauth for next request: (empty)
[2127] 1607599046.699150: AS key determined by preauth: aes256-cts/43C6
[2127] 1607599046.699151: Decrypted AS reply; session key is: aes256-cts/7175
[2127] 1607599046.699152: FAST negotiation: unavailable
[2127] 1607599046.699153: Initializing FILE:/tmp/krb5cc_1000 with default princ HTTP/IngressHostname@CUSTOMER.COM
[2127] 1607599046.699154: Storing HTTP/IngressHostname@CUSTOMER.COM -> krbtgt/CUSTOMER.COM@CUSTOMER.COM in FILE:/tmp/krb5cc_1000
[2127] 1607599046.699155: Storing config in FILE:/tmp/krb5cc_1000 for krbtgt/CUSTOMER.COM@CUSTOMER.COM: pa_type: 2
[2127] 1607599046.699156: Storing HTTP/IngressHostname@CUSTOMER.COM -> krb5_ccache_conf_data/pa_type/krbtgt\/CUSTOMER.COM\@CUSTOMER.COM@X-CACHECONF: in FILE:/tmp/krb5cc_1000
This output is generated because we set KRB5_TRACE=/dev/stdout. This shows the successful authentication to ad1.customer.com for the userPrincipalName = HTTP/IngressHostname@CUSTOMER.COM.
This test works because the service account we are using has User Principal Name (UPN) = Service Principal Name (SPN), which is what SAS Viya requires for the HTTP principal.
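If you can query Active Directory from this host, you can also confirm the UPN and SPN directly on the service account. The following is a minimal sketch that assumes the openldap clients and the cyrus-sasl-gssapi package are installed; the LDAP server name and search base are illustrative and should match your domain:
KRB5_CONFIG=krb5.conf kinit -kt http.keytab HTTP/IngressHostname@CUSTOMER.COM
KRB5_CONFIG=krb5.conf ldapsearch -LLL -H ldap://ad1.customer.com -Y GSSAPI \
  -b "DC=customer,DC=com" \
  "(servicePrincipalName=HTTP/IngressHostname)" \
  userPrincipalName servicePrincipalName
kdestroy
The userPrincipalName returned should be HTTP/IngressHostname@CUSTOMER.COM, matching the servicePrincipalName with the realm appended.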
Finally, we can test that delegation is setup correctly for the HTTP principal. These steps are different if we are testing unconstrained delegation compared to constrained delegation. For unconstrained delegation you can complete the following:
Authenticate to AD as an end-user:
KRB5_CONFIG=krb5.conf \
kinit user@CUSTOMER.COM
Request a service ticket for the HTTP principal:
KRB5_CONFIG=krb5.conf \
kvno HTTP/IngressHostname@CUSTOMER.COM
You should see something like:
HTTP/IngressHostname@CUSTOMER.COM: kvno = 2
Inspect the Kerberos ticket cache:
klist -f
You should see something like:
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: user@CUSTOMER.COM
Valid starting Expires Service principal
04/06/2021 03:08:02 04/06/2021 13:08:02 krbtgt/CUSTOMER.COM@CUSTOMER.COM
renew until 04/07/2021 03:07:58, Flags: FRIA
04/06/2021 03:10:46 04/06/2021 13:08:02 HTTP/IngressHostname@CUSTOMER.COM
Flags: FAO
The flags on the HTTP service ticket are FAO, which means:
- F: Forwardable
- A: preAuthenticated
- O: Okay as delegate
So, we can see that the service ticket obtained for the HTTP principal is valid for unconstrained delegation since it has the Okay as delegate flag.
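You can also confirm the delegation setting directly on the HTTP service account in Active Directory. The sketch below uses the same ldapsearch approach as earlier (and assumes you still hold a valid Kerberos credential in your cache); 1.2.840.113556.1.4.803 is the Active Directory bitwise-AND matching rule and 524288 (0x80000) is the TRUSTED_FOR_DELEGATION bit of userAccountControl, so the entry is only returned if the account is trusted for unconstrained delegation:
KRB5_CONFIG=krb5.conf ldapsearch -LLL -H ldap://ad1.customer.com -Y GSSAPI \
  -b "DC=customer,DC=com" \
  "(&(servicePrincipalName=HTTP/IngressHostname)(userAccountControl:1.2.840.113556.1.4.803:=524288))" \
  dn userAccountControl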
To validate the constrained delegation settings for the HTTP principal you will need to know the SPN of one of the data sources the HTTP principal has been trusted to delegate to. You can then use the following steps to validate the configuration; here we test with the SPN for Microsoft SQL Server as an example:
Use the following command to authenticate to AD as the HTTP Principal:
KRB5_CONFIG=krb5.conf \
kinit -kt http.keytab HTTP/IngressHostname@CUSTOMER.COM
Use the following command to request a service ticket for the Microsoft SQL Server principal using constrained delegation:
KRB5_CONFIG=krb5.conf \
kvno -k http.keytab \
-U user@CUSTOMER.COM -P MSSQLSvc/mssql.server.customer.com@CUSTOMER.COM
You should see something like the following:
MSSQLSvc/mssql.server.customer.com@CUSTOMER.COM: kvno = 2, keytab entry valid
Use the following command to inspect the Kerberos ticket cache:
klist -f
You should see something like the following:
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: HTTP/IngressHostname@CUSTOMER.COM
Valid starting Expires Service principal
04/08/2021 02:46:22 04/08/2021 12:46:22 krbtgt/CUSTOMER.COM@CUSTOMER.COM
renew until 04/09/2021 02:46:22, Flags: FRIA
04/08/2021 02:47:02 04/08/2021 12:46:22 HTTP/IngressHostname@CUSTOMER.COM
for client user\@CUSTOMER.COM@CUSTOMER.COM, Flags: FA
04/08/2021 02:47:03 04/08/2021 12:46:22 MSSQLSvc/mssql.server.customer.com@CUSTOMER.COM
for client user\@CUSTOMER.COM@CUSTOMER.COM, Flags: FA
The flags on the HTTP service ticket are FA, which means:
- F: Forwardable
- A: preAuthenticated
So, we can see that the HTTP Principal is correctly configured for Kerberos constrained delegation to the MS SQL Server principal. You can see that the two service tickets in the ticket cache have been obtained on behalf of the end-user. Also, since we did not need to provide any credentials for the end-user, you can see that S4U2Self is working correctly for the HTTP Principal.
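Similarly, you can confirm the constrained delegation settings on the HTTP service account in Active Directory. A sketch using the same ldapsearch approach is shown below; msDS-AllowedToDelegateTo lists the SPNs the account is allowed to delegate to, and the 16777216 (0x1000000) bit of userAccountControl indicates that protocol transition ("Use any authentication protocol") is enabled, which is needed here because the HTTP principal obtains tickets on behalf of the end-user without the end-user's credentials:
KRB5_CONFIG=krb5.conf ldapsearch -LLL -H ldap://ad1.customer.com -Y GSSAPI \
  -b "DC=customer,DC=com" \
  "(servicePrincipalName=HTTP/IngressHostname)" \
  msDS-AllowedToDelegateTo userAccountControl
You would expect MSSQLSvc/mssql.server.customer.com to appear in the msDS-AllowedToDelegateTo values.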
This completes validating the prerequisites.
The configuration of SAS Viya for Kerberos delegation is covered in the README.md file located in sas-bases/examples/kerberos/sas-servers/. At a high level the steps are:
- Copy the example kerberos directories from sas-bases/examples/kerberos into your site-config directory
- Copy the Kerberos keytab and krb5.conf files into each of those directories
- Replace the placeholders in the configmaps.yaml files with the values for your environment
- Add references to the new directories and to the Kerberos transformers in the base kustomization.yaml
- Rebuild the site.yaml, apply it, and restart SAS Cloud Analytic Services
Here we will present some example commands you can use to complete this configuration. First, you can use the following commands to copy all the files into place for the http, sas-servers, and cas-server configuration directories:
PROJECT_DIR=~/project/deploy/viya; \
mkdir -p ${PROJECT_DIR}/site-config/kerberos; \
mkdir -p ${PROJECT_DIR}/site-config/kerberos/http
cp http.keytab ${PROJECT_DIR}/site-config/kerberos/http/keytab
cp krb5.conf ${PROJECT_DIR}/site-config/kerberos/http/krb5.conf
cp ${PROJECT_DIR}/sas-bases/examples/kerberos/http/* ${PROJECT_DIR}/site-config/kerberos/http/
cp -r ${PROJECT_DIR}/sas-bases/examples/kerberos/sas-servers ${PROJECT_DIR}/site-config/kerberos
cp -r ${PROJECT_DIR}/sas-bases/examples/kerberos/cas-server ${PROJECT_DIR}/site-config/kerberos
cp ${PROJECT_DIR}/site-config/kerberos/http/keytab ${PROJECT_DIR}/site-config/kerberos/sas-servers/keytab
cp ${PROJECT_DIR}/site-config/kerberos/http/krb5.conf ${PROJECT_DIR}/site-config/kerberos/sas-servers/krb5.conf
cp ${PROJECT_DIR}/site-config/kerberos/http/keytab ${PROJECT_DIR}/site-config/kerberos/cas-server/keytab
cp ${PROJECT_DIR}/site-config/kerberos/http/krb5.conf ${PROJECT_DIR}/site-config/kerberos/cas-server/krb5.conf
sudo chmod -R u+w ${PROJECT_DIR}/site-config/kerberos
You can then use the following command to inspect the file system contents you have created:
tree ${PROJECT_DIR}/site-config/kerberos/
You should see something like the following:
project/deploy/viya/site-config/kerberos/
├── cas-server
│ ├── configmaps.yaml
│ ├── keytab
│ ├── krb5.conf
│ ├── kustomization.yaml
│ └── secrets.yaml
├── http
│ ├── configmaps.yaml
│ ├── keytab
│ ├── krb5.conf
│ ├── kustomization.yaml
│ ├── README.md
│ └── secrets.yaml
└── sas-servers
├── configmaps.yaml
├── keytab
├── krb5.conf
├── kustomization.yaml
├── README.md
└── secrets.yaml
3 directories, 17 files
You could then use the following commands to replace the required placeholders in different files:
FOR UNCONSTRAINED DELEGATION:
PROJECT_DIR=~/project/deploy/viya
SPN="HTTP/IngressHostname@CUSTOMER.COM"; \
sed -i "s&{{ PRINCIPAL-NAME-IN-KEYTAB }}&${SPN}&g" ${PROJECT_DIR}/site-config/kerberos/http/configmaps.yaml
sed -i "s&{{ SPN }}&${SPN}&g" ${PROJECT_DIR}/site-config/kerberos/http/configmaps.yaml
sed -i 's/false/true/g' ${PROJECT_DIR}/site-config/kerberos/http/configmaps.yaml
SPN="HTTP/IngressHostname"; \
sed -i "s&{{ SPN }}&${SPN}&g" ${PROJECT_DIR}/site-config/kerberos/sas-servers/configmaps.yaml
sed -i "s&info&debug&g" ${PROJECT_DIR}/site-config/kerberos/sas-servers/configmaps.yaml
sed -i "s&{{ SPN }}&${SPN}&g" ${PROJECT_DIR}/site-config/kerberos/cas-server/configmaps.yaml
sed -i "s&{{ HTTP_SPN }}&${SPN}&g" ${PROJECT_DIR}/site-config/kerberos/cas-server/configmaps.yaml
FOR CONSTRAINED DELEGATION:
PROJECT_DIR=~/project/deploy/viya
SPN="HTTP/IngressHostname@CUSTOMER.COM"; \
sed -i "s&{{ PRINCIPAL-NAME-IN-KEYTAB }}&${SPN}&g" ${PROJECT_DIR}/site-config/kerberos/http/configmaps.yaml
sed -i "s&{{ SPN }}&${SPN}&g" ${PROJECT_DIR}/site-config/kerberos/http/configmaps.yaml
sed -i 's/false/true/g' ${PROJECT_DIR}/site-config/kerberos/http/configmaps.yaml
sed -i 's/HOLDONTOGSSCONTEXT=true/HOLDONTOGSSCONTEXT=false/g' ${PROJECT_DIR}/site-config/kerberos/http/configmaps.yaml
SPN="HTTP/IngressHostname"; \
sed -i "s&{{ SPN }}&${SPN}&g" ${PROJECT_DIR}/site-config/kerberos/sas-servers/configmaps.yaml
sed -i "s&info&debug&g" ${PROJECT_DIR}/site-config/kerberos/sas-servers/configmaps.yaml
sed -i '/KRB5PROXY_LOG_TYPE/i - SAS_CONSTRAINED_DELEG_ENABLED="true"' ${PROJECT_DIR}/site-config/kerberos/sas-servers/configmaps.yaml
sed -i "s&{{ SPN }}&${SPN}&g" ${PROJECT_DIR}/site-config/kerberos/cas-server/configmaps.yaml
sed -i "s&{{ HTTP_SPN }}&${SPN}&g" ${PROJECT_DIR}/site-config/kerberos/cas-server/configmaps.yaml
Now that you have all the configuration in-place you can update the kustomization.yaml to include the references to the new files. For example, you could use the following commands to update the kustomization.yaml:
PROJECT_DIR=~/project/deploy/viya
sed -i '/kerberos/d' ${PROJECT_DIR}/kustomization.yaml
sed -i '/configurations:$/i \ \ - site-config\/kerberos\/http' ${PROJECT_DIR}/kustomization.yaml
sed -i '/configurations:$/i \ \ - site-config/kerberos/sas-servers' ${PROJECT_DIR}/kustomization.yaml
sed -i '/configurations:$/i \ \ - site-config/kerberos/cas-server' ${PROJECT_DIR}/kustomization.yaml
sed -i '/required\/transformers.yaml$/a \ \ - sas-bases/overlays/kerberos/http/transformers.yaml' ${PROJECT_DIR}/kustomization.yaml
sed -i '/required\/transformers.yaml$/i \ \ - sas-bases/overlays/kerberos/sas-servers/sas-kerberos-job-tls.yaml' ${PROJECT_DIR}/kustomization.yaml
sed -i '/required\/transformers.yaml$/i \ \ - sas-bases/overlays/kerberos/sas-servers/sas-kerberos-deployment-tls.yaml' ${PROJECT_DIR}/kustomization.yaml
sed -i '/required\/transformers.yaml$/i \ \ - sas-bases/overlays/kerberos/sas-servers/cas-kerberos-tls-transformer.yaml' ${PROJECT_DIR}/kustomization.yaml
sed -i '/required\/transformers.yaml$/i \ \ - sas-bases/overlays/kerberos/sas-servers/cas-kerberos-direct.yaml' ${PROJECT_DIR}/kustomization.yaml
This has done the following:
- Removed any existing kerberos entries from the kustomization.yaml
- Added site-config/kerberos/http, site-config/kerberos/sas-servers, and site-config/kerberos/cas-server to the end of the block that immediately precedes the configurations block (normally the resources block)
- Added sas-bases/overlays/kerberos/http/transformers.yaml immediately after the required transformers.yaml entry
- Added the sas-servers and cas-server Kerberos transformers immediately before the required transformers.yaml entry
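After these edits, the relevant parts of the kustomization.yaml should look something like the following. This is an illustrative excerpt only; the lines shown as ... stand in for the other entries in your deployment, which will differ:
resources:
  ...
  - site-config/kerberos/http
  - site-config/kerberos/sas-servers
  - site-config/kerberos/cas-server
configurations:
  ...
transformers:
  ...
  - sas-bases/overlays/kerberos/sas-servers/sas-kerberos-job-tls.yaml
  - sas-bases/overlays/kerberos/sas-servers/sas-kerberos-deployment-tls.yaml
  - sas-bases/overlays/kerberos/sas-servers/cas-kerberos-tls-transformer.yaml
  - sas-bases/overlays/kerberos/sas-servers/cas-kerberos-direct.yaml
  - sas-bases/overlays/required/transformers.yaml
  - sas-bases/overlays/kerberos/http/transformers.yaml
  ...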
You can then build the new site.yaml and apply it. Remember you will need to restart SAS Cloud Analytic Services for the changes to be picked up.
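As a sketch of those final steps, assuming you build the manifest with the kustomize command and apply it directly with kubectl (adjust to the build and apply method your deployment uses), and restarting SAS Cloud Analytic Services by deleting the CAS server pods so the CAS operator recreates them (check the labels on your CAS pods first if your server is not named default):
PROJECT_DIR=~/project/deploy/viya; \
NS={{ SAS VIYA NAMESPACE }}; \
kustomize build ${PROJECT_DIR} -o ${PROJECT_DIR}/site.yaml; \
kubectl -n ${NS} apply -f ${PROJECT_DIR}/site.yaml; \
kubectl -n ${NS} delete pods -l casoperator.sas.com/server=default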
Now that the updated site.yaml has been applied and SAS Cloud Analytic Services restarted, you can confirm the Kubernetes objects exist as expected. Remember, the configuration settings we have applied are not visible in SAS Environment Manager or through the SAS Viya CLI.
Use the following commands to check the configmap for Kerberos used by SAS Logon Manager:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} describe configmap `kubectl -n ${NS} get pod -l app=sas-logon-app \
-o=jsonpath='{.items[*].spec.containers[0].envFrom[*].configMapRef.name}'|grep -E -o 'sas-kerberos-config-.*'`
You should see something like the following:
Name: sas-kerberos-config-hctbd4b8f9
Namespace: {{ SAS VIYA NAMESPACE }}
Labels: sas.com/admin=cluster-local
sas.com/deployment=sas-viya
Annotations:
Data
====
JAVA_OPTION_KRB5_CONF:
----
-Djava.security.krb5.conf=/opt/kerberos/krb5.conf
SAS_LOGON_KERBEROS_HOLDONTOGSSCONTEXT:
----
true
SPRING_PROFILES_ACTIVE:
----
ldap,postgresql,kerberos
SAS_LOGON_KERBEROS_SPN:
----
HTTP/IngressHostname@CUSTOMER.COM
JAVA_OPTION_JGSS_DEBUG:
----
-Dsun.security.jgss.debug=true
JAVA_OPTION_KRB5_DEBUG:
----
-Dsun.security.krb5.debug=true
SAS_LOGON_KERBEROS_DEBUG:
----
true
SAS_LOGON_KERBEROS_DISABLEDELEGATIONWARNING:
----
true
SAS_LOGON_KERBEROS_KEYTABLOCATION:
----
file:///opt/kerberos/keytab
SAS_LOGON_KERBEROS_SERVICEPRINCIPAL:
----
HTTP/IngressHostname@CUSTOMER.COM
Events: <none>
This shows us that the configmap has been applied to the SAS Logon Manager pod and the content of the configmap is what we expect it to be.
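Since the configmap enables SAS_LOGON_KERBEROS_DEBUG and the Java Kerberos debug options, another quick check is to look for Kerberos activity in the SAS Logon Manager log after a user has attempted to log on. This is just an illustrative grep; the exact messages will vary, and you may need to add -c to select the container if your pod runs more than one:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} logs -l app=sas-logon-app --tail=2000 | grep -i -E 'kerberos|krb5|spnego'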
Use the following commands to check podTemplates now include the sas-krb5-proxy side-car container:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} get podtemplates -o=custom-columns='NAME:.metadata.name,CONTAINERS:.template.spec.containers[*].name'
You should see something like the following:
NAME CONTAINERS
sas-batch-pod-template sas-programming-environment,sas-krb5-proxy
sas-compute-job-config sas-programming-environment,sas-krb5-proxy
sas-connect-pod-template sas-programming-environment,sas-krb5-proxy
sas-launcher-job-config sas-programming-environment,sas-krb5-proxy
sas-qkb-bootstrap sas-qkb-bootstrap
So, we can see that the sas-krb5-proxy side-car container is included.
Use the following commands to check the configmaps used by the podTemplates:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} get podtemplates -o=custom-columns='NAME:.metadata.name,CONTAINER:.template.spec.containers[1].name,CONFIGMAPS:.template.spec.containers[1].envFrom[*].configMapRef.name'
You should see something like the following:
NAME CONTAINER CONFIGMAPS
sas-batch-pod-template sas-krb5-proxy sas-servers-kerberos-config-c857d9mhm9,sas-go-config-kgd978fkh4,sas-shared-config-42d7dk5684,sas-tls-config-99f79t29t5
sas-compute-job-config sas-krb5-proxy sas-servers-kerberos-config-c857d9mhm9,sas-go-config-kgd978fkh4,sas-shared-config-42d7dk5684,sas-tls-config-99f79t29t5
sas-connect-pod-template sas-krb5-proxy sas-servers-kerberos-config-c857d9mhm9,sas-go-config-kgd978fkh4,sas-shared-config-42d7dk5684,sas-tls-config-99f79t29t5
sas-launcher-job-config sas-krb5-proxy sas-servers-kerberos-config-c857d9mhm9,sas-go-config-kgd978fkh4,sas-shared-config-42d7dk5684,sas-tls-config-99f79t29t5
error: array index out of bounds: index 1, length 1
This shows that the sas-krb5-proxy side-car container is using the sas-servers-kerberos-config-######### configmap. The error on the final line is expected, since the sas-qkb-bootstrap podTemplate contains only a single container and so has no index 1.
Use the following commands to check the values in the configmap used by the podTemplates:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} describe configmap \
`kubectl -n ${NS} get podtemplates sas-batch-pod-template -o=jsonpath='{.template.spec.containers[1].envFrom[0].configMapRef.name}'`
You should see something like the following:
Name: sas-servers-kerberos-config-c857d9mhm9
Namespace: {{ SAS VIYA NAMESPACE }}
Labels: sas.com/admin=cluster-local
sas.com/deployment=sas-viya
Annotations:
Data
====
KRB5PROXY_LOG_TYPE:
----
debug
KRB5_CONFIG:
----
/opt/kerberos/krb5.conf
KRB5_KTNAME:
----
/opt/kerberos/keytab
SAS_KERBEROS_ENABLED:
----
true
SAS_KRB5_PROXY_OUTPUT_PATH:
----
/tmp/
SAS_KRB5_PROXY_SPN:
----
HTTP/IngressHostname
Events: <none>
This nested kubectl command has fetched the configmap name from the sas-batch-pod-template for the second container and the first configmap, and then performed a describe on that configmap. This shows the content we put in the site-config/kerberos/sas-servers/configmaps.yaml has been correctly loaded.
Use the following commands to check the containers included in the CAS Controller pod:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} get CASDeployments -o=custom-columns='NAME:.metadata.name,CONTAINERS:.spec.controllerTemplate.spec.containers[*].name'
You should see something like the following:
NAME CONTAINERS
default cas,sas-backup-agent,sas-consul-agent,sas-krb5-proxy
This shows that the sas-krb5-proxy side-car container is included in the CAS Controller pod.
Use the following commands to check the configmaps used by the sas-krb5-proxy side-car container in the CAS Controller pod:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} get CASDeployments -o=custom-columns='NAME:.metadata.name,CONTAINERS:.spec.controllerTemplate.spec.containers[3].name,CONFIGMAPS:.spec.controllerTemplate.spec.containers[3].envFrom[*].configMapRef.name'
You should see something like the following:
NAME CONTAINERS CONFIGMAPS
default sas-krb5-proxy sas-servers-kerberos-config-c857d9mhm9,sas-go-config-kgd978fkh4,sas-shared-config-42d7dk5684,sas-tls-config-99f79t29t5
So, the sas-krb5-proxy side-car container in the CAS Controller pod is using the same sas-servers-kerberos-config-########## configmap as the SAS Compute Server pod templates.
Use the following commands to check the contents of the configmap used by the sas-krb5-proxy side-car container in the CAS Controller pod:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} describe configmap \
`kubectl -n ${NS} get CASDeployments -o jsonpath='{.items[*].spec.controllerTemplate.spec.containers[3].envFrom[0].configMapRef.name}'`
You should see something like the following:
Name: sas-servers-kerberos-config-c857d9mhm9
Namespace: {{ SAS VIYA NAMESPACE }}
Labels: sas.com/admin=cluster-local
sas.com/deployment=sas-viya
Annotations:
Data
====
KRB5_KTNAME:
----
/opt/kerberos/keytab
SAS_KERBEROS_ENABLED:
----
true
SAS_KRB5_PROXY_OUTPUT_PATH:
----
/tmp/
SAS_KRB5_PROXY_SPN:
----
HTTP/IngressHostname
KRB5PROXY_LOG_TYPE:
----
debug
KRB5_CONFIG:
----
/opt/kerberos/krb5.conf
Events: <none>
This nested kubectl command has fetched the configmap name from the CASDeployment for the fourth container and the first configmap, and then performed a describe on that configmap. This shows the content we put in the site-config/kerberos/sas-servers/configmaps.yaml has been correctly loaded.
This completes checking the configuration created from the sas-servers directory we added to the kustomization.yaml.
Finally, we will check the configuration for direct Kerberos connections to SAS Cloud Analytic Services. Use the following commands to check the containers included in the CAS Controller pod:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} get CASDeployments -o=custom-columns='NAME:.metadata.name,CONTAINERS:.spec.controllerTemplate.spec.containers[*].name'
You should see something like the following:
NAME CONTAINERS
default cas,sas-backup-agent,sas-consul-agent,sas-krb5-proxy
Notice that the cas container is the first container, so will be index=0 in our next command. Use the following commands to check CASDeployments now include the cas-server-kerberos-config-########## configmap on the cas container:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} get CASDeployments -o=custom-columns='NAME:.metadata.name,CONTAINERS:.spec.controllerTemplate.spec.containers[0].name,CONFIGMAPS:.spec.controllerTemplate.spec.containers[0].envFrom[*].configMapRef.name'
You should see something like the following:
NAME CONTAINERS CONFIGMAPS
default cas sas-shared-config-42d7dk5684,sas-java-config-t7t5thbgkd,sas-access-config-hmmdg9cckh,sas-cas-config-f24k2hg6d5,sas-deployment-metadata-k9d2tf8mmh,sas-restore-job-parameters-6dh8htc9fg,sas-tls-config-99f79t29t5,cas-server-kerberos-config-dmtdfh2kbm
This shows the configmap cas-server-kerberos-config-########## is the last configmap in the cas container. So, re-run the last command and only list the name of the last (index=-1) configmap:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} get CASDeployments -o=custom-columns='NAME:.metadata.name,CONTAINERS:.spec.controllerTemplate.spec.containers[0].name,CONFIGMAPS:.spec.controllerTemplate.spec.containers[0].envFrom[-1].configMapRef.name'
You should see something like the following:
NAME CONTAINERS CONFIGMAPS
default cas cas-server-kerberos-config-dmtdfh2kbm
Use the following commands to check the contents of the configmap used by the cas container in the CAS Controller pod:
NS={{ SAS VIYA NAMESPACE }}; \
kubectl -n ${NS} describe configmap \
`kubectl -n ${NS} get CASDeployments -o jsonpath='{.items[*].spec.controllerTemplate.spec.containers[0].envFrom[-1].configMapRef.name}'`
You should see something like the following:
Name: cas-server-kerberos-config-dmtdfh2kbm
Namespace: {{ SAS VIYA NAMESPACE }}
Labels: sas.com/admin=cluster-local
sas.com/deployment=sas-viya
Annotations:
Data
====
CASUSEDEFAULTCACHE:
----
1
CAS_SERVER_PRINCIPAL:
----
HTTP/IngressHostname
KRB5_KTNAME:
----
/opt/kerberos/keytab
SAS_KRB5_PROXY_SPN:
----
HTTP/IngressHostname
Events: <none>
This matches the information we put into the site-config/kerberos/cas-server/configmaps.yaml file, which are the settings for direct Kerberos connections to CAS. This completes checking all of the configured items.
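As a final end-to-end check you can exercise the SPNEGO negotiation from the same Linux host we used to validate the prerequisites. The sketch below assumes curl was built with GSSAPI/SPNEGO support and that the /SASLogon/login path is reachable in your environment; a 200 or a redirect status after the Negotiate exchange, rather than a 401, indicates SAS Logon Manager accepted the Kerberos ticket:
KRB5_CONFIG=krb5.conf kinit user@CUSTOMER.COM; \
curl -k -s -o /dev/null -w '%{http_code}\n' --negotiate -u : https://IngressHostname/SASLogon/login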
In this article we have presented ways you can validate the prerequisites for Kerberos delegation, either constrained or unconstrained. We have then presented some example commands you can use to set up all the required SAS Viya configuration files. Finally, once you have configured Kerberos delegation with the 2020.1.4 release of SAS Viya, we have shown some kubectl commands you can use to confirm the configuration has been applied correctly.