In tightly controlled dark‑site or air-gapped environments, SAS Viya administrators often need a way to discover software updates and patches without giving the platform unrestricted internet access. The SAS Update Checker can solve this by checking for available updates through a proxy so that all outbound traffic is controlled and auditable.
In many organisations, Kubernetes worker nodes are not allowed to talk directly to the internet and must send web traffic via a proxy. Configuring the SAS Update Checker to use that proxy lets it query for updates safely:
SAS provides a sample overlay in the deployment assets that configures the Update Checker CronJob with the additional parameters needed to talk to a proxy server.
In my demo environment, I deployed a simple Tinyproxy container inside my Kubernetes cluster, exposed it via a ClusterIP Service, and then configured the Update Checker to send all outbound requests through that service. The same pattern applies to more complex setups, with Tinyproxy replaced by a hardened corporate proxy deployed anywhere reachable from the cluster.
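For reference, a minimal sketch of such a test deployment follows. The image, resource names, and the proxy-test namespace are my demo choices, not SAS requirements; substitute your own proxy image and naming.

```yaml
# Minimal Tinyproxy test deployment (demo assumptions throughout).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tinyproxy
  namespace: proxy-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tinyproxy
  template:
    metadata:
      labels:
        app: tinyproxy
    spec:
      containers:
      - name: tinyproxy
        image: docker.io/vimagick/tinyproxy  # community image; swap in your own build
        ports:
        - containerPort: 8888                # Tinyproxy's default listen port
---
apiVersion: v1
kind: Service
metadata:
  name: tinyproxy
  namespace: proxy-test
spec:
  type: ClusterIP
  selector:
    app: tinyproxy
  ports:
  - port: 3128             # port used in the proxy URL below
    targetPort: 8888
```

The Service maps port 3128 (the conventional proxy port used in the proxy URL) to Tinyproxy's default listen port 8888.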
In my case, the proxy service was exposed at http://tinyproxy.proxy-test.svc.cluster.local:3128.
I could then verify connectivity with:
kubectl run nettest --rm -it \
--image=curlimages/curl \
--restart=Never -- sh
curl -v -x http://tinyproxy.proxy-test.svc.cluster.local:3128 https://cr.sas.com/
The output shows a CONNECT cr.sas.com:443 request followed by a normal TLS handshake, confirming that the proxy is reachable from inside the cluster and can reach SAS endpoints.
* Host tinyproxy.proxy-test.svc.cluster.local:3128 was resolved.
* IPv6: (none)
* IPv4: xx.xx.xx.xxx
* Trying xx.xx.xx.xxx:3128...
* CONNECT: no ALPN negotiated
* allocate connect buffer
* Establish HTTP proxy tunnel to cr.sas.com:443
> CONNECT cr.sas.com:443 HTTP/1.1
> Host: cr.sas.com:443
> User-Agent: curl/8.17.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.0 200 Connection established
< Proxy-agent: tinyproxy/1.11.0
<
* CONNECT phase completed
* CONNECT tunnel established, response 200
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* SSL Trust Anchors:
* CAfile: /cacert.pem
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 / secp256r1 / rsaEncryption
...
SAS Update Checker runs as a Kubernetes CronJob that uses the sas-orchestration image to query SAS services and generate a report of available updates and hot fixes. To route all of its outbound traffic through the proxy, add the required proxy environment variables to the container specification in that CronJob: copy the example manifest to site-config, set the proxy values for the three variables, and remember to add the transformer to kustomization.yaml.
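The kustomization.yaml wiring is a single entry in the transformers list pointing at the copied overlay. The path and file name below are illustrative; check your deployment assets for the exact name of the sample manifest.

```yaml
# kustomization.yaml (fragment) -- path/file name below is hypothetical
transformers:
- site-config/sas-update-checker/proxy-transformer.yaml
```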
In my environment, the Update Checker's env block looks like this:
env:
...
- name: HTTP_PROXY
value: "http://tinyproxy.proxy-test.svc.cluster.local:3128"
- name: HTTPS_PROXY
value: "http://tinyproxy.proxy-test.svc.cluster.local:3128"
- name: NO_PROXY
value: "localhost,127.0.0.1,172.18.0.0/24,.cluster.local,.svc"
...
With this setup, all SAS Update Checker HTTP and HTTPS calls go through the same controlled proxy path, while internal calls stay inside the cluster and are not unnecessarily proxied.
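As a rough illustration of how NO_PROXY entries are typically matched, here is a simplified sketch in Python. It is not the exact matcher used by the Go HTTP client inside sas-orchestration; in particular, real clients also understand CIDR entries such as 172.18.0.0/24, which this sketch ignores.

```python
def should_bypass_proxy(host: str, no_proxy: str) -> bool:
    """Simplified NO_PROXY matching: exact host match, or domain-suffix
    match for entries starting with a dot. Real clients (e.g. Go's
    net/http) also match CIDR ranges like 172.18.0.0/24."""
    for entry in no_proxy.split(","):
        entry = entry.strip()
        if not entry:
            continue
        if entry.startswith("."):
            if host.endswith(entry):
                return True
        elif host == entry:
            return True
    return False

NO_PROXY = "localhost,127.0.0.1,172.18.0.0/24,.cluster.local,.svc"
print(should_bypass_proxy("sas-logon-app.viya.svc.cluster.local", NO_PROXY))  # True
print(should_bypass_proxy("cr.sas.com", NO_PROXY))                            # False
```

This shows why in-cluster calls to `.cluster.local` and `.svc` names skip the proxy while external SAS endpoints go through it.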
Once the proxy configuration is in place, it is important to confirm that Update Checker actually uses it. I verified with two simple checks.
1. Observe proxy logs while Update Checker runs.
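One way to watch this live is to tail the proxy logs while the job runs (deployment name and namespace are from my demo setup):

```shell
# Follow Tinyproxy logs while the Update Checker job runs
kubectl logs -f deploy/tinyproxy -n proxy-test
```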
My Tinyproxy logs show that a connection is made to the SAS Entitlements Service when the Update Checker job kicks off:
CONNECT Dec 19 05:12:30.574 [1]: Connect (file descriptor 5): 10.42.0.155
CONNECT Dec 19 05:12:30.574 [1]: Request (file descriptor 5): CONNECT ses.sas.download:443 HTTP/1.1
INFO Dec 19 05:12:30.574 [1]: No upstream proxy for ses.sas.download
INFO Dec 19 05:12:30.574 [1]: opensock: opening connection to ses.sas.download:443
INFO Dec 19 05:12:30.577 [1]: opensock: getaddrinfo returned for ses.sas.download:443
CONNECT Dec 19 05:12:30.578 [1]: Established connection to host "ses.sas.download" using file descriptor 6.
INFO Dec 19 05:12:30.578 [1]: Not sending client headers to remote machine
INFO Dec 19 05:12:35.615 [1]: Closed connection between local client (fd:5) and remote client (fd:6)
2. Temporarily stop the proxy and confirm connection failure.
Scaling the proxy Deployment to zero replicas makes the service endpoint unreachable:
kubectl scale deploy/tinyproxy -n proxy-test --replicas=0
kubectl create job --from=cronjob/sas-update-checker \
sas-update-checker-proxy-failtest
With the proxy stopped, the Update Checker job log shows a connection failure:
Found 2 pods, using pod/sas-update-checker-proxy-failtest-2q6rn
{"level":"info","version":1,"source":"sas-orchestration","messageKey":"sas-orchestration.command.started","messageParameters":{"name":"report"},"properties":{"logger":"internal/apihelpers","caller":"apihelpers/apihelpers.go:44"},"timeStamp":"2025-12-19T05:13:08.670823+00:00","message":"The report command started"}
{"level":"info","version":1,"source":"sas-orchestration","messageKey":"sas-orchestration.current.time","messageParameters":{"now":"2025-12-19T05:13:08Z"},"properties":{"logger":"internal/cmd/report","caller":"report/report.go:59"},"timeStamp":"2025-12-19T05:13:08.671198+00:00","message":"Current time is '2025-12-19T05:13:08Z'"}
{"level":"info","version":1,"source":"sas-orchestration","messageKey":"sas-orchestration.command.failure","messageParameters":{"name":"report"},"properties":{"logger":"internal/apihelpers","caller":"apihelpers/apihelpers.go:50"},"timeStamp":"2025-12-19T05:13:09.699384+00:00","message":"The report command failed"}
Error loading entitlements file: "https://ses.sas.download/ses/entitlements.json"
Caused by:
* Failed to get 'https://ses.sas.download/ses/entitlements.json'
* Get "https://ses.sas.download/ses/entitlements.json": proxyconnect tcp: dial tcp 10.43.25.144:3128: connect: connection refused
The fact that the Update Checker job fails with connection errors while the proxy is down, and succeeds again once the proxy is scaled back up, demonstrates a hard dependency on the proxy path. This kind of failure/restore test is a good fit for CI/CD or operational readiness checks before fully hardening a dark‑site environment.
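To complete the restore half of the test, scale the proxy back up and re-trigger the job (names are from my demo setup):

```shell
# Bring the proxy back and rerun the Update Checker
kubectl scale deploy/tinyproxy -n proxy-test --replicas=1
kubectl create job --from=cronjob/sas-update-checker \
  sas-update-checker-proxy-oktest
kubectl logs -f job/sas-update-checker-proxy-oktest
```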
This simple demonstration shows how administrators can create a bridge between strict network controls and the operational need to stay informed about available updates and patches.
Find more articles from SAS Global Enablement and Learning here.