
Implementing Domain Based Network Access Rules in Kubernetes


Editor's note: The SAS Analytics Cloud is a new Software as a Service (SaaS) offering from SAS using containerized technology on the SAS Cloud. You can find out more or take advantage of a SAS Analytics Cloud free trial.


This is one of several related articles the Analytics Cloud team has put together while operating in this new digital realm. These articles address enhancements the support team made to a production Kubernetes cluster in order to meet customers' application needs. They also walk through a couple of technical issues the team encountered and the solutions developed to resolve them.


Articles in the sequence:

How to secure, pre-seed and speed up Kubernetes deployment

Implementing Domain Based Network Access Rules in Kubernetes (current article)

Detect and manage idle applications in Kubernetes

Extending change management into Kubernetes

 


Kubernetes is a robust platform that does a lot of things well, and the same applies to the many software-defined networking solutions available to clusters. One operational issue, though, is managing allowed outbound access to web-based resources. In a world where cloud-based resources normally enjoy unrestricted outbound access, this use case may seem foreign. However, in several Kubernetes clusters we manage, we need an additional layer of security to administer that type of access.

 

SAS deploys this technology in our SAS Analytics Cloud Kubernetes cluster as well. This is where we were able to identify which use cases internal to SAS may also apply to the clusters where we support our customers.

 

Why?

Ingress is a native Kubernetes feature that allows control over inbound connections to a namespace. In customer environments where security is paramount, our team needed an equivalent self-service way to control outbound web traffic from the clusters. Any outbound access must be managed to minimize the security risk of connecting to untrusted sources, and this control capability is missing as a native Kubernetes feature.

 

We realized managing access controls should not require a complex process, and deploying the solution should be simple. After looking at existing solutions external to SAS, we didn't find anything matching those criteria.

We did see, in supporting application operations at SAS, that many more internal applications are slated for containerization. Our Information Security team tasked us with providing a solution for managing outbound access from the applications on that very list. Fortunately, our Egress Proxy solution was already under development.

 

The Egress Proxy has benefited us inside SAS and could be useful across the Kubernetes community. In environments like ours, where security matters and outbound web access needs controls, we now have a solution that is easy to deploy and manage.

 

How it works

We designed the Egress Proxy process for simple deployment and usage. For connectivity, the proxy server functions just like any other proxy you have used: it accepts connections and tunnels you to the requested resource (the CONNECT method for SSL connections). Where it differs, other than residing on your Kubernetes cluster, is how it manages access control.
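
As a quick illustration, here is a minimal sketch of how an application pod might send traffic through the proxy. The service DNS name egress-proxy.egress.svc is hypothetical; port 3128 matches the NGINX listen directive shown later in Code 2.

package main

import (
    "fmt"
    "net/http"
    "net/url"
)

func main() {
    // Hypothetical in-cluster service name for the proxy; port 3128 matches
    // the listen directive in the NGINX configuration shown in Code 2.
    proxyURL, err := url.Parse("http://egress-proxy.egress.svc:3128")
    if err != nil {
        panic(err)
    }
    client := &http.Client{
        Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
    }

    // HTTPS requests are tunneled with the CONNECT method; plain HTTP
    // requests are forward-proxied directly.
    resp, err := client.Get("https://github.com")
    if err != nil {
        fmt.Println("request blocked or failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}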

 

We take advantage of the Kubernetes API to monitor for modifications to the Egress Custom Resources (CRs), as well as for pods being added to or removed from each namespace. These events are important to us because we manage access control lists (ACLs) on a per-pod basis, and the master list comes from the CR applied to each namespace. In Code 1 below, we watch the Kubernetes API for new or updated Egress resources. Once one is seen, we can update the namespace's ACL.

 

// List the Egress resources in the namespace through the generated clientset.
list, err := c.egressclientset.CrdV1beta1().Egresses(namespace).List(metav1.ListOptions{})
if err != nil {
    klog.Infof("Issue querying namespace %s for Egress, unable to sync", namespace)
    return err
}

// If the namespace has an Egress resource, collect its allowed site URLs.
if len(list.Items) > 0 {
    klog.Infof("Egress found: %v", list.Items)
    egressObj := list.Items[0]
    for _, site := range egressObj.Spec.SITES {
        sites = append(sites, site.URL)
    }
}

Code 1: Monitor for new or updated resources
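
For context, the Egress object referenced above might be backed by Go types along these lines. This is a sketch inferred from the field accesses in Code 1 (Spec.SITES, site.URL), not the actual definition.

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Sketch of the Egress custom resource types, inferred from Code 1.
type Egress struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec EgressSpec `json:"spec"`
}

type EgressSpec struct {
    // SITES holds the list of hosts the namespace's pods may reach.
    SITES []Site `json:"sites"`
}

type Site struct {
    URL string `json:"url"`
}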

 

When a pod communicates through the proxy, the destination host it is attempting to access is checked against the ACL list generated for that pod. If the host matches an ACL on the list, the traffic proceeds. If it does not, a forbidden status code is returned to the process trying to access the web.


In Code 2 below, our proxy server takes the pod and destination URL information and forwards it to the authorization route in our controller. The auth module used, ngx_http_auth_request_module, is compiled into your NGINX installation.

 

server { 
      listen                         3128 default_server; 
      server_name  _; 
      auth_request /auth; 
 
      # dns resolver used by forward proxying - CoreDNS service 
      resolver                       10.254.0.10 ipv6=off; 
 
      # forward proxy for CONNECT request 
      proxy_connect; 
      proxy_connect_allow            443; 
      proxy_connect_connect_timeout  10s; 
      proxy_connect_read_timeout     10s; 
      proxy_connect_send_timeout     10s; 
 
      # forward proxy for non-CONNECT request 
      location / { 
          proxy_pass http://$host; 
          proxy_set_header Host $host; 
      } 
      location /auth { 
          internal; 
          proxy_set_header X-Real-IP  $remote_addr; 
          proxy_set_header Http-Host $http_host; 
          proxy_set_header X-Original-URI $request; 
          proxy_pass  http://localhost:8080/authcheck; 
      } 
    } 

Code 2: Pod and destination URL sent to the controller

 

After NGINX has sent the information to the auth route in the controller, we take it and do a site check to determine whether the traffic may pass. If the check in Code 3 below fails, a 403 is returned to the client.

 

func (c *Controller) siteCheck(sourceIP string, host string) bool {

    for _, acl := range c.aclList.Acls[sourceIP] {
        klog.Infof("check host %s against acl %s", host, acl)
        // Regex for when the ACL is an explicit site, e.g. google.com
        domainReg, _ := regexp.Compile(`^` + acl + `$`)
        // Regex for when the ACL covers all subdomains of a site,
        // e.g. maps.google.com and news.google.com
        subDomainReg, _ := regexp.Compile(`([a-zA-Z0-9][a-zA-Z0-9.-]*)*.?` + acl)
        if acl[0:1] == "." && subDomainReg.MatchString(host) {
            // ACL was requested as .<site>.com: match any subdomain
            return true
        } else if acl[0:1] != "." && domainReg.MatchString(host) {
            // ACL was entered as <site>.com: match exactly
            return true
        }
    }
    return false
}

Code 3: Site check
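
To show how siteCheck plugs in, here is a minimal sketch of the Deny All path of the /authcheck handler that NGINX's auth_request calls (Code 5 later shows the Allow All branch of the real handler). The handler name and exact header handling are assumptions based on the headers NGINX forwards in Code 2.

// Sketch of the route behind proxy_pass http://localhost:8080/authcheck.
func (c *Controller) authCheck(w http.ResponseWriter, r *http.Request) {
    // NGINX forwards the pod IP and requested host (see Code 2). For
    // CONNECT requests the host value may carry a port (e.g. host:443)
    // that a real implementation would need to strip.
    sourceIP := r.Header.Get("X-Real-IP")
    host := r.Header.Get("Http-Host")

    if c.siteCheck(sourceIP, host) {
        w.WriteHeader(http.StatusOK) // auth_request passes, traffic proceeds
        return
    }
    w.WriteHeader(http.StatusForbidden) // 403 propagates back to the pod
}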

 

The resulting ACLs are keyed by the pod's IP address and are thus ephemeral: ACL creation and deletion follow the pod life cycle. This prevents any unwanted access as pod IPs move around between namespaces.

 

The code snippet in Code 4 shows the function called when a pod terminates. This removes the stored ACL from memory.

 

func (c *Controller) deleteAcls(obj interface{}) {
    pod, ok := obj.(*corev1.Pod)
    if !ok {
        return
    }
    podIp := pod.Status.PodIP
    podNameSpace := pod.ObjectMeta.Namespace
    // Drop the per-pod ACL entry keyed by the pod's IP address.
    delete(c.aclList.Acls, podIp)
    klog.Infof("Pod ACL removed for recently deleted resource in %s", podNameSpace)
}

Code 4: Pod termination function call
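
For completeness, the counterpart that populates the map when a pod appears might look like the sketch below. The function name and the way the namespace's site list arrives are assumptions mirroring Code 4.

// Hypothetical counterpart to deleteAcls: when a pod is added, store the
// namespace's allowed sites under the pod's IP address.
func (c *Controller) addAcls(obj interface{}, sites []string) {
    pod, ok := obj.(*corev1.Pod)
    if !ok || pod.Status.PodIP == "" {
        // Pod not yet assigned an IP (or unexpected type): nothing to key on.
        return
    }
    c.aclList.Acls[pod.Status.PodIP] = sites
    klog.Infof("Pod ACL added for new resource in %s", pod.ObjectMeta.Namespace)
}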

 

How to deploy

The process we have set up includes adding a Kubernetes Custom Resource Definition (CRD), so the Kubernetes API recognizes the Egress type, along with the associated Deployment for the Egress controller that manages the ACLs and runs the forward proxy.
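
For illustration, the CRD might be defined roughly as in the sketch below, using the apiextensions v1beta1 API that is current at the time of writing. The group name crd.example.com and the plural are placeholders, not our actual values.

import apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"

// Sketch of the Egress CRD as a Go object; the CRD name must be
// <plural>.<group>, and the resource is namespace-scoped so each
// application namespace can carry its own Egress rules.
crd := &apiextv1beta1.CustomResourceDefinition{
    ObjectMeta: metav1.ObjectMeta{Name: "egresses.crd.example.com"},
    Spec: apiextv1beta1.CustomResourceDefinitionSpec{
        Group:   "crd.example.com",
        Version: "v1beta1",
        Scope:   apiextv1beta1.NamespaceScoped,
        Names: apiextv1beta1.CustomResourceDefinitionNames{
            Plural:   "egresses",
            Singular: "egress",
            Kind:     "Egress",
        },
    },
}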

 

For a more secure configuration, physical network access to the internet (firewall/security groups) is limited to a specific set of nodes. Once that is established, you add taints to those nodes to restrict scheduling there to the Egress pods, which tolerate the taint (see the sketch below). This ensures any application wanting to access resources on the internet must pass through the provided proxy.
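
A sketch of that scheduling piece follows: the taint applied to the internet-facing nodes and the matching toleration carried only by the Egress pods. The key/value pair egress=true is an example, not our actual configuration.

// Taint placed on the internet-connected nodes: repels every pod that
// does not carry a matching toleration.
taint := corev1.Taint{
    Key:    "egress",
    Value:  "true",
    Effect: corev1.TaintEffectNoSchedule,
}

// Toleration added only to the Egress proxy pods so they alone can be
// scheduled onto the tainted nodes.
toleration := corev1.Toleration{
    Key:      "egress",
    Operator: corev1.TolerationOpEqual,
    Value:    "true",
    Effect:   corev1.TaintEffectNoSchedule,
}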

 

Flexible configuration

The Egress Proxy is a flexible solution with multiple modes of deployment. The process described above is the Deny All mode. Below is a short explanation of each scenario.

 

Deny All - As the name implies, this mode denies all requests to tunnel network access unless a rule has been requested. This is where the Custom Resources (CRs) in each namespace come into play: the requested host is matched against the list of requested resources. If a pod lives in a namespace with no Egress CR, or attempts a connection to a host not present in its CR, the request is denied.

 

Allow All - This mode lets the proxy run in a familiar way: it allows all access to outbound resources. If a namespace has a CR, then only access to the explicitly set rules is allowed. In Allow All mode, we can also set a deny list, fed to the Egress controller via an environment variable. This lets an administrator run the proxy with a much smaller list of rules, allowing everything except the stated restrictions. Our use case for this scenario centered around access to the Kubernetes API in the cluster.
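
A deny-list check of the kind Code 5 below calls (c.denyListCheck) might look like this sketch. The c.denyList field, the environment variable name, and the matching rules are assumptions.

// Hypothetical matcher used by the handler in Code 5. The deny list itself
// could be parsed once at startup from a comma-separated environment
// variable, e.g. c.denyList = strings.Split(os.Getenv("EGRESS_DENY_LIST"), ",").
func (c *Controller) denyListCheck(host string) bool {
    for _, denied := range c.denyList {
        // Reject exact matches and any subdomain of a denied host.
        if host == denied || strings.HasSuffix(host, "."+denied) {
            return true
        }
    }
    return false
}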

 

The snippet in Code 5 is from our Allow All case. It permits all traffic in this scenario, with two exceptions:

  1. An explicit Egress resource is supplied in the namespace
  2. A deny list is set up; if so, any traffic matching a hostname on that list is rejected

case "allow": 
if len(c.aclList.Acls[sourceIP]) == 0 { 
if !c.denyListInc { 
w.WriteHeader(http.StatusOK) 
return 
} else { 
denyCheck := c.denyListCheck(host) 
 
if denyCheck { 
w.WriteHeader(http.StatusForbidden) 
return 
} else { 
w.WriteHeader(http.StatusOK) 
return 
} 
} 
 
} else { 
 
aclCheck := c.siteCheck(sourceIP, host) 
 
if aclCheck { 
w.WriteHeader(http.StatusOK) 
return 
} else { 
w.WriteHeader(http.StatusForbidden) 
return 
} 
} 
} 

Code 5: Allow all mode

 

When needed, the proxy runs in a namespaced configuration to ensure it is not shared across namespaces. A Role Based Access Control (RBAC) configuration in Kubernetes on the service account the proxy runs under limits it to seeing resources inside its own application namespace.
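
For illustration, those namespaced permissions could be expressed with a Role built from the standard RBAC types, along these lines. The role name, namespace, and API group here are examples, not our actual configuration.

import rbacv1 "k8s.io/api/rbac/v1"

// Sketch: a Role scoped to one application namespace, letting the proxy's
// service account read pods and Egress resources there and nothing else.
// The API group "crd.example.com" is a placeholder.
role := &rbacv1.Role{
    ObjectMeta: metav1.ObjectMeta{Name: "egress-proxy", Namespace: "my-app"},
    Rules: []rbacv1.PolicyRule{
        {
            APIGroups: []string{""},
            Resources: []string{"pods"},
            Verbs:     []string{"get", "list", "watch"},
        },
        {
            APIGroups: []string{"crd.example.com"},
            Resources: []string{"egresses"},
            Verbs:     []string{"get", "list", "watch"},
        },
    },
}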

 

Conclusion

When we talk about egress filtering, it is always in terms of limiting unauthorized access or potentially malicious traffic leaving a trusted network. It is a common technique to prevent data loss/exfiltration, covert channel initiation, or lateral movement within the same network. Egress access could allow an attacker to pull additional tools in from external networks, as well as push data back out of the environment. Putting easy controls in place to mitigate this risk is our aim. We hope to open source this code sometime in the future.
