N224
Obsidian | Level 7

This article shares an approach to testing and working with Viya 4 on custom-built environments, whether locally, in the cloud, or anywhere else you can spin up a few VMs.

 

 

First of all, prepare four VMs, each with 12 vCPUs and 24 GB RAM, running Ubuntu 20.04 LTS. Here's my example:

# 1. VMKUB01 Master node : Ubuntu 20.04 LTS, 500 GB Storage Space, 24 GB RAM, 12 vCPU
# 2. VMKUB02 Worker node 1 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 3. VMKUB03 Worker node 2 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 4. VMKUB04 Worker node 3 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU

 

You cannot deploy Viya 4 on a single machine: by default Kubernetes won't schedule more than 110 pods on one node, and Viya needs more than that :). You can run each whole script at once, or go through it step by step to better understand what's going on.

Remember that 80% of the work is preparing the cluster; the remaining 20% is the Viya file configuration and deployment.
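If you want to verify that pod limit yourself later, the per-node pod capacity is visible once the cluster is up (the node name below is from my example):

kubectl get node vmkub01 -o jsonpath='{.status.capacity.pods}'
# prints 110 with default kubelet settings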

 

The scripts, and the overall approach, break down into the following steps:

  1. Prepare VMs.
  2. Prepare files from portal.
  3. 1st Script on master node:
    • # Step 1 , preinstall
    • # Step 2, Kubernetes install
    • # Step 3, turn off swap
    • # Step 4, OS Tuning
    • # Step 5, Installing more packages
    • # Step 6, Docker install
    • # Step 7, create kubernetes cluster
    • # Step 8, Install kubernetes networking - calico and metallb
    • # Step 9, Install helm
    • # Step 10, Install ingress
    • # Step 11, Install and configure NFS Server
    • # Step 12, Install NFS Subdir External Provisioner
    • # Step 13, Creating pod security policies
    • # Step 14, check if NFS works
    • # Step 15, SCP files to workers
  4. Script on each worker node:
    • # Step 1 , preinstall
    • # Step 2, Kubernetes install
    • # Step 3, turn off swap
    • # Step 4, OS Tuning
    • # Step 5, Installing more packages
    • # Step 6, Docker install
    • # Step 7, Joining kubernetes cluster
  5. Now you should be able to list working nodes
    sudo su -l root
    root@vmkub01:~# kubectl get nodes
    NAME      STATUS   ROLES                  AGE     VERSION
    vmkub01   Ready    control-plane,master   2d15h   v1.21.5
    vmkub02   Ready    <none>                 2d15h   v1.21.5
    vmkub03   Ready    <none>                 2d15h   v1.21.5
    vmkub04   Ready    <none>                 2d15h   v1.21.5
  6. Install portainer (it's quite helpful)
    root@vmkub01:~# kubectl get all -n portainer
    NAME                             READY   STATUS    RESTARTS   AGE
    pod/portainer-5d6dbf85dd-2mdtl   1/1     Running   1          2d15h
    
    NAME                TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                        AGE
    service/portainer   LoadBalancer   10.111.183.207   10.0.110.101   9000:31659/TCP,9443:31248/TCP,8000:30691/TCP   2d15h
    
    NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/portainer   1/1     1            1           2d15h
    
    NAME                                   DESIRED   CURRENT   READY   AGE
    replicaset.apps/portainer-5d6dbf85dd   1         1         1       2d15h
  7. 2nd Script on Master node:
    • # Step 1, Install Kustomize
    • # Step 2, Install CertManager
    • # Step 3, Create CA and Issuer with CertManager for full-TLS deployment
    • # Step 4, deploying operator
    • # Step 5, building directory structure
    • # Step 6, Installation
      • # Namespace
      • # Ingress
      • # TLS
      • # StorageClass
      • # License
      • # SAS Orchestration
      • # Install
  8. After some time check status of sasoperator with
    kubectl -n sasoperator get sasdeployment
    
    root@vmkub01:~# kubectl -n sasoperator get sasdeployment
    NAME       STATE       CADENCENAME   CADENCEVERSION   CADENCERELEASE           AGE
    sas-viya   SUCCEEDED   stable        2021.1.6         20211104.1636065570555   2d11h

I really enjoyed watching progress with portainer 🙂
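In case you want it too, here is a minimal sketch of how Portainer can be installed with Helm (the chart repository and values are the ones documented by Portainer, not part of my scripts; the LoadBalancer service type is what lets MetalLB hand it the external IP visible above):

helm repo add portainer https://portainer.github.io/k8s/
helm repo update
helm install portainer portainer/portainer \
    --namespace portainer --create-namespace \
    --set service.type=LoadBalancer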

 

Prepare the deployment files from the portal (mine are):

License file : SASViyaV4_9XXXXX_0_stable_2021.1.6_license_2021-10-21T071701.jwt
Certs file : SASViyaV4_9XXXXX_certs.zip
TGZ file : SASViyaV4_9XXXXX_0_stable_2021.1.6_20211020.1634741786565_deploymentAssets_2021-10-21T071710.tgz

At installation time, use the same account with the same password on each machine (it helps a lot). My account is called gabos.

After a successful OS installation, log in to the master node over SSH with your account.

Copy your deployment files via scp (or any other file transfer tool) to /home/gabos/ (your own account, of course).

Create a new file (vi script.sh), paste in the whole script below, adjust the variables to your needs, make it executable (chmod +x script.sh), and run it.
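For example, copying the three deployment files listed above from your workstation to the master node could look like this:

scp SASViyaV4_9XXXXX_0_stable_2021.1.6_license_2021-10-21T071701.jwt \
    SASViyaV4_9XXXXX_certs.zip \
    SASViyaV4_9XXXXX_0_stable_2021.1.6_20211020.1634741786565_deploymentAssets_2021-10-21T071710.tgz \
    gabos@vmkub01.local:/home/gabos/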

 

1st Script on master node :

#!/bin/bash

# This script is designed and released by Gabos Software, feel free to use it and make it better


## Configuration section; it contains the important variables needed to prepare the whole environment.
# It's hardcoded for an environment like:
# 1. VMKUB01 Master node : Ubuntu 20.04 LTS, 500 GB Storage Space, 24 GB RAM, 12 vCPU
# 2. VMKUB02 Worker node 1 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 3. VMKUB03 Worker node 2 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 4. VMKUB04 Worker node 3 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# I prepared all OSes with user account called "gabos"

# HOST
export NAZWADNS="vmkub01.local" # The DNS name you will use to reach the site; you will access SAS Viya only by this name, e.g. https://vmkub01.local. Remember that this name must resolve to the load balancer IP address on the workstation you connect to Viya from
export KUBEJOIN="/home/gabos/KUBEJOIN.log" # Path of the file where the kubernetes join command (with its access token) will be stored

# NFS :
export STORAGEFOLDER="/home/saspodstorage" # This folder will be created to store the persistent volumes served by the NFS server
export NFSRULES="*(rw,sync,no_subtree_check,crossmnt,fsid=0)" # you shouldn't touch this
export NFSNETWORK="10.0.110.0/24" # no longer really used; the NFS export is accessible "world"-style
export NFSSERVER="10.0.110.95" # This is your VMKUB01 (master node), the NFS server IP

# NODES :
export WORKER1="10.0.110.96" # first worker node
export WORKER1DNS="vmkub02.local" # not actually used, but fill it in just in case
export WORKER2="10.0.110.97" # second worker node
export WORKER2DNS="vmkub03.local" # not actually used, but fill it in just in case
export WORKER3="10.0.110.98" # third worker node
export WORKER3DNS="vmkub04.local" # not actually used, but fill it in just in case
export KUBEJOINREMOTE1="gabos@$WORKER1:/home/gabos/KUBEJOIN.log" # path for the KUBEJOIN.log file on worker1; the file will be sent via scp from master to worker
export KUBEJOINREMOTE2="gabos@$WORKER2:/home/gabos/KUBEJOIN.log" # path for the KUBEJOIN.log file on worker2; the file will be sent via scp from master to worker
export KUBEJOINREMOTE3="gabos@$WORKER3:/home/gabos/KUBEJOIN.log" # path for the KUBEJOIN.log file on worker3; the file will be sent via scp from master to worker

# KUBERNETES :
export NETWORKCIDR="192.168.0.0/16" # the internal kubernetes pod network, passed to kubeadm init
export NETWORKADDR="10.0.110.95" # should be the same as NFSSERVER; it's the master node IP address
export METALLBADDR="10.0.110.100-10.0.110.120" # provide a range of addresses for load balancers; ingress-nginx will get one of these IPs to route your traffic
export SASNAMESPACE="sasoperator" # this is the namespace for the sasoperator and the SAS deployment; you shouldn't change it
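# Example (the address below is hypothetical; MetalLB will pick one from METALLBADDR):
# on the workstation you connect from, NAZWADNS must resolve to the ingress
# controller's external IP, e.g. with an /etc/hosts entry like:
#   10.0.110.100   vmkub01.local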
# -------------------------------------------------------------------------------------------------------------------------------------

# !!!!!!!!!!! DO NOT TOUCH THE VARIABLES BELOW !!!!!!!!!!!!
export PREINST1="vim git curl wget pip apt-transport-https nfs-common"
export PREINST2="gnupg2 software-properties-common ca-certificates"
export DOCKERINST="containerd.io docker-ce docker-ce-cli"
export KUBEVERSION="1.21.5-00"
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

# Step 1 , preinstall
echo -e "Preinstalling \v packets\v"
sudo apt update
sudo apt -y install $PREINST1
clear
# --------------------------------------------------------

# Step 2, Kubernetes install
echo -e "Installing kubernetes\v"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt-get install -y kubelet=$KUBEVERSION kubectl=$KUBEVERSION kubeadm=$KUBEVERSION
sudo apt-mark hold kubelet kubeadm kubectl
clear
# --------------------------------------------------------

# Step 3, turn off swap
echo -e "Turning\v off\v SWAPu\v"
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
clear
# --------------------------------------------------------

# Step 4, OS Tuning
echo -e "Tuning\v OS\v"
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
clear
# --------------------------------------------------------


# Step 5, Installing more packages
echo -e "Installing\v packages 2\v"
sudo apt install -y $PREINST2
clear
# --------------------------------------------------------


# Step 6, Docker install
echo -e "Installing \v Docker\v"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y $DOCKERINST
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
clear
# --------------------------------------------------------

# Step 7, create kubernetes cluster
echo -e "Rozpoczynam\v konfigurajce\v Kubernetes\v"
sudo systemctl enable kubelet
sudo kubeadm config images pull
sudo kubeadm init --pod-network-cidr=$NETWORKCIDR --apiserver-advertise-address=$NETWORKADDR >> $KUBEJOIN
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=$HOME/.kube/config" | tee -a ~/.bashrc
clear
# --------------------------------------------------------
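# Optional sanity check (not part of the original flow): the API server should
# already answer at this point; the node will stay NotReady until Calico is
# installed in the next step.
kubectl get nodes
kubectl get pods -n kube-system
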


# Step 8, Install kubernetes networking - calico and metallb
echo -e "Kubernets \v configuration\v"
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.3/manifests/metallb.yaml
sudo tee metallbcm.yml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $METALLBADDR
EOF
kubectl create -f metallbcm.yml
clear
# --------------------------------------------------------
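# Optional check: Calico and MetalLB pods should all reach Running before the
# ingress controller is installed in the next steps.
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get pods -n metallb-system
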

# Step 9, Install helm
echo -e "Installing\v HELM\v"
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
clear
# --------------------------------------------------------

# Step 10, Install ingress
echo -e "Ingress\v Nginx 0.43 \v"
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/baremetal/deploy.yaml
sed -i 's/NodePort/LoadBalancer/g' deploy.yaml
kubectl create ns ingress-nginx
kubectl apply -f deploy.yaml
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=180s
clear
# --------------------------------------------------------
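# Optional check: MetalLB should have assigned an EXTERNAL-IP from METALLBADDR to
# the ingress controller; that is the address NAZWADNS has to resolve to.
kubectl -n ingress-nginx get service ingress-nginx-controller
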

# Step 11, Install and configure NFS Server 

echo -e "NFS \v"
sudo apt -y install nfs-kernel-server
sudo cat /proc/fs/nfsd/versions
sudo mkdir -p /srv/nfs4/nfs-share
sudo mkdir -p $STORAGEFOLDER
sudo mount --bind $STORAGEFOLDER /srv/nfs4/nfs-share
sudo echo "$STORAGEFOLDER /srv/nfs4/nfs-share  none   bind   0   0" >> /etc/fstab
sudo mount -a
sudo ufw allow from $NFSNETWORK to any port nfs
sudo echo "/srv/nfs4/nfs-share         $NFSRULES" >> /etc/exports
sudo chmod 777 -R $STORAGEFOLDER
sudo exportfs -ar
sudo exportfs -v
sudo systemctl restart nfs-server
sleep 1m
clear
# --------------------------------------------------------
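# Optional check: the export should now be visible from the server itself.
showmount -e localhost
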

# Step 12, Install NFS Subdir External Provisioner
kubectl create ns $SASNAMESPACE
echo -e "NFS \v Subdir \v  Subdir \v External \v Provisioner"
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=$NFSSERVER \
    --set nfs.path=/srv/nfs4/nfs-share \
    --set storageClass.defaultClass=true \
    --set storageClass.accessModes=ReadWriteMany \
    --namespace $SASNAMESPACE

# --------------------------------------------------------
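# Optional check: the provisioner pod should be Running and the nfs-client
# StorageClass should be registered as the default class.
kubectl -n $SASNAMESPACE get pods
kubectl get storageclass
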

# Step 13, Creating pod security policies
echo -e "PSP\v"

tee psp-privileged.yaml << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
EOF


tee psp-baseline.yaml << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: baseline
  annotations:
    # Optional: Allow the default AppArmor profile, requires setting the default.
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: false
  # The moby default capability set, minus NET_RAW
  allowedCapabilities:
    - 'CHOWN'
    - 'DAC_OVERRIDE'
    - 'FSETID'
    - 'FOWNER'
    - 'MKNOD'
    - 'SETGID'
    - 'SETUID'
    - 'SETFCAP'
    - 'SETPCAP'
    - 'NET_BIND_SERVICE'
    - 'SYS_CHROOT'
    - 'KILL'
    - 'AUDIT_WRITE'
  # Allow all volume types except hostpath
  volumes:
    # 'core' volume types
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use.
    - 'csi'
    - 'persistentVolumeClaim'
    - 'ephemeral'
    # Allow all other non-hostpath volume types.
    - 'awsElasticBlockStore'
    - 'azureDisk'
    - 'azureFile'
    - 'cephFS'
    - 'cinder'
    - 'fc'
    - 'flexVolume'
    - 'flocker'
    - 'gcePersistentDisk'
    - 'gitRepo'
    - 'glusterfs'
    - 'iscsi'
    - 'nfs'
    - 'photonPersistentDisk'
    - 'portworxVolume'
    - 'quobyte'
    - 'rbd'
    - 'scaleIO'
    - 'storageos'
    - 'vsphereVolume'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  readOnlyRootFilesystem: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    # The PSP SELinux API cannot express the SELinux Pod Security Standards,
    # so if using SELinux, you must choose a more restrictive default.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
EOF

tee psp-restricted.yaml << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  # Allow core volume types.
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use.
    - 'csi'
    - 'persistentVolumeClaim'
    - 'ephemeral'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
EOF

kubectl apply -f psp-restricted.yaml
kubectl apply -f psp-baseline.yaml
kubectl apply -f psp-privileged.yaml

kubectl get psp restricted -o custom-columns=NAME:.metadata.name,"SECCOMP":".metadata.annotations.seccomp\.security\.alpha\.kubernetes\.io/allowedProfileNames"
clear
# --------------------------------------------------------

# Step 14, check if NFS works
sudo tee test-pod.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF

kubectl apply -f test-pod.yaml -n sasoperator
sleep 1m
kubectl describe pod test-pod -n sasoperator
clear
# --------------------------------------------------------
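# Optional check (the subdirectory name is whatever the provisioner generated for
# the test claim): the test pod should have left a SUCCESS file under the share.
ls -l $STORAGEFOLDER/*/SUCCESS
# Optional cleanup of the test resources:
kubectl delete -f test-pod.yaml -n sasoperator
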

# Step 15, SCP files to workers

scp $KUBEJOIN $KUBEJOINREMOTE1
scp $KUBEJOIN $KUBEJOINREMOTE2
scp $KUBEJOIN $KUBEJOINREMOTE3

# --------------------------------------------------------

Script on each worker node :

#!/bin/bash

# This script is designed and released by Gabos Software, feel free to use it and make it better


## Configuration section; it contains the important variables needed to prepare the whole environment.
# It's hardcoded for an environment like:
# 1. VMKUB01 Master node : Ubuntu 20.04 LTS, 500 GB Storage Space, 24 GB RAM, 12 vCPU
# 2. VMKUB02 Worker node 1 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 3. VMKUB03 Worker node 2 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 4. VMKUB04 Worker node 3 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# I prepared all OSes with user account called "gabos"

# HOST
export KUBEJOIN="/home/gabos/KUBEJOIN.log" # local path to the KUBEJOIN.log file (copied here via scp by the master-node script)

# !!!!!!!!!!! DO NOT TOUCH THE VARIABLES BELOW !!!!!!!!!!!!
export PREINST1="vim git curl wget pip apt-transport-https nfs-common"
export PREINST2="gnupg2 software-properties-common ca-certificates"
export DOCKERINST="containerd.io docker-ce docker-ce-cli"
export KUBEVERSION="1.21.5-00"
# -----------------------------------------------------------------

# Step 1 , preinstall
echo -e "Preinstall\v 1\v"
sudo apt update
sudo apt -y install $PREINST1
clear


# Step 2, Kubernetes install
echo -e "Installing kubernetes\v"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt-get install -y kubelet=$KUBEVERSION kubectl=$KUBEVERSION kubeadm=$KUBEVERSION
sudo apt-mark hold kubelet kubeadm kubectl
clear
# --------------------------------------------------------


# Step 3, turn off swap
echo -e "Turning\v off\v SWAPu\v"
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
clear
# --------------------------------------------------------


# Step 4, OS Tuning
echo -e "Tuning\v OS\v"
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
clear
# --------------------------------------------------------


# Step 5, Installing more packages
echo -e "Installing\v packages 2\v"
sudo apt install -y $PREINST2
clear
# --------------------------------------------------------


# Step 6, Docker install
echo -e "Installing \v Docker\v"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y $DOCKERINST
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
clear
# --------------------------------------------------------

# Step 7, joining kubernetes cluster
echo -e "Joining \v kuberenets cluster\v"
sudo systemctl enable kubelet
cat $KUBEJOIN | grep 'kubeadm join' -A 1
export JOINCMD=$(cat $KUBEJOIN | grep 'kubeadm join' -A 1)
export JOINCMD=$(echo $JOINCMD | sed  's/\\//g')
sudo $JOINCMD

# --------------------------------------------------------
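# Optional check (run on the master node, not here): the new worker should appear
# in the node list and eventually report Ready.
#   kubectl get nodes
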

2nd Script on master node :

#!/bin/bash

# This script is designed and released by Gabos Software, feel free to use it and make it better


## Configuration section; it contains the important variables needed to prepare the whole environment.
# It's hardcoded for an environment like:
# 1. VMKUB01 Master node : Ubuntu 20.04 LTS, 500 GB Storage Space, 24 GB RAM, 12 vCPU
# 2. VMKUB02 Worker node 1 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 3. VMKUB03 Worker node 2 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 4. VMKUB04 Worker node 3 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# I prepared all OSes with user account called "gabos"

export NAZWADNS="vmkub01.local" # The DNS name you will use to reach the site; you will access SAS Viya only by this name, e.g. https://vmkub01.local. Remember that this name must resolve to the load balancer IP address on the workstation you connect to Viya from


# Config
export SCIEZKA="/home/gabos" # work folder where all the files are located
export PLIKDEPLOY="SASViyaV4_9XXXX_0_stable_2021.1.6_20211020.1634741786565_deploymentAssets_2021-10-21T071710.tgz" # tgz deploy file
export PLIKLICENCJA="SASViyaV4_9XXXX_0_stable_2021.1.6_license_2021-10-21T071701.jwt" # license file
export PLIKCERTS="SASViyaV4_9XXXX_certs.zip" # certs file
export SASNAMESPACE="sasoperator" # this is namespace for sasoperator and sas deployment, you shouldn't change it
export CADENCE="stable" # installation cadence; it can be lts or stable. Read about it in the documentation or just copy it from the deployment file name
export CADENCEVERSION="2021.1.6" # full version description; read about it in the documentation or just copy it from the deployment file name
# all of the files should be in the path defined by SCIEZKA

# Step 1, Install Kustomize
snap install kustomize

# Step 2, Install CertManager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.5.4 \
  --set installCRDs=true
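# Optional check: the cert-manager, cainjector and webhook pods should all be
# Running before the issuers in the next step are created.
kubectl -n cert-manager get pods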

# Step 3, Create CA and Issuer with CertManager for full-TLS deployment
cd $SCIEZKA
kubectl create namespace sandbox

tee CAAuthority.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: sandbox
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-selfsigned-ca
  namespace: sandbox
spec:
  isCA: true
  commonName: my-selfsigned-ca
  secretName: root-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: sas-viya-issuer
  namespace: sandbox
spec:
  ca:
    secretName: root-secret
EOF

kubectl apply -f CAAuthority.yaml -n sandbox
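# Optional check: the issuers and the CA certificate should report Ready=True.
kubectl get clusterissuer selfsigned-issuer
kubectl -n sandbox get certificate my-selfsigned-ca
kubectl -n sandbox get issuer sas-viya-issuer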

# Step 4, deploying operator
cd $SCIEZKA
mkdir operator-deploy
cp $PLIKDEPLOY operator-deploy
cd operator-deploy
tar xvfz $PLIKDEPLOY
cp -r sas-bases/examples/deployment-operator/deploy/* .
chmod +w site-config/transformer.yaml
# Change the values for the namespace and the clusterrolebinding
sed -i 's/{{ NAME-OF-CLUSTERROLEBINDING }}/sasoperator/g' site-config/transformer.yaml
sed -i 's/{{ NAME-OF-NAMESPACE }}/sasoperator/g' site-config/transformer.yaml
kustomize build . | kubectl -n sasoperator apply -f -
kubectl get all -n sasoperator
# ------------------

# Step 5, building directory structure
cd $SCIEZKA
mkdir deploy
cp $PLIKDEPLOY deploy
cd deploy
tar xvfz $PLIKDEPLOY
rm -rf $PLIKDEPLOY
mkdir site-config
# -------------------------


# Step 6, Installation
cd $SCIEZKA
cd deploy
tee kustomization.yaml << EOF
namespace: {{ NAME-OF-NAMESPACE }} 
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer 
- sas-bases/overlays/network/networking.k8s.io 
- sas-bases/overlays/cas-server
- sas-bases/overlays/internal-postgres
# If your deployment contains programming-only offerings only, comment out the next line
- sas-bases/overlays/internal-elasticsearch
- sas-bases/overlays/update-checker
- sas-bases/overlays/cas-server/auto-resources 
configurations:
- sas-bases/overlays/required/kustomizeconfig.yaml
transformers:
# If your deployment does not support privileged containers or if your deployment
# contains programming-only offerings, comment out the next line 
- sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml
- sas-bases/overlays/required/transformers.yaml
- site-config/security/cert-manager-provided-ingress-certificate.yaml 
- sas-bases/overlays/cas-server/auto-resources/remove-resources.yaml 
# If your deployment contains programming-only offerings only, comment out the next line
- sas-bases/overlays/internal-elasticsearch/internal-elasticsearch-transformer.yaml
# Mount information
# - site-config/{{ DIRECTORY-PATH }}/cas-add-host-mount.yaml
components:
- sas-bases/components/security/core/base/full-stack-tls 
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls 
patches:
- path: site-config/storageclass.yaml 
  target:
    kind: PersistentVolumeClaim
    annotationSelector: sas.com/component-name in (sas-backup-job,sas-data-quality-services,sas-commonfiles,sas-cas-operator,sas-pyconfig)
# License information
# secretGenerator:
# - name: sas-license
#   type: sas.com/license
#   behavior: merge
#   files:
#   - SAS_LICENSE=license.jwt
configMapGenerator:
- name: ingress-input
  behavior: merge
  literals:
  - INGRESS_HOST={{ NAME-OF-INGRESS-HOST }}
- name: sas-shared-config
  behavior: merge
  literals:
  - SAS_SERVICES_URL=https://{{ NAME-OF-INGRESS-HOST }}:{{ PORT }} 
  # - SAS_URL_EXTERNAL_VIYA={{ EXTERNAL-PROXY-URL }}
EOF

# Change {{ NAME-OF-NAMESPACE }}
sed -i 's/{{ NAME-OF-NAMESPACE }}/sasoperator/g' kustomization.yaml
# Del auto-resources
sed -i '/auto-resources/d' kustomization.yaml 

# Ingress
export INGRESS_HOST=$(kubectl -n ingress-nginx get service ingress-nginx-controller  -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
sed -i "s/{{ NAME-OF-INGRESS-HOST }}/$NAZWADNS/g" kustomization.yaml
export INGRESS_HTTPS_PORT=$(kubectl -n ingress-nginx get service ingress-nginx-controller  -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
sed -i "s/{{ PORT }}/$INGRESS_HTTPS_PORT/g" kustomization.yaml


# TLS
cd $SCIEZKA
cd deploy
mkdir site-config/security
cp sas-bases/examples/security/cert-manager-provided-ingress-certificate.yaml site-config/security/cert-manager-provided-ingress-certificate.yaml
sed -i "s/{{ CERT-MANAGER-ISSUER-NAME }}/sas-viya-issuer/g" site-config/security/cert-manager-provided-ingress-certificate.yaml
sed -i "s/{{ CERT-MANAGER_ISSUER_NAME }}/sas-viya-issuer/g" site-config/security/cert-manager-provided-ingress-certificate.yaml 

# StorageClass
cd $SCIEZKA
cd deploy
export STORAGECLASS=$(kubectl get storageclass -o jsonpath='{.items[*].metadata.name}')
tee site-config/storageclass.yaml << EOF
kind: RWXStorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
 name: wildcard
spec:
 storageClassName: $STORAGECLASS
EOF


# License
cd $SCIEZKA
mkdir license
cp $PLIKLICENCJA license

# SAS Orchestration
docker pull cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.64.0-20211012.1634057996496
docker tag cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.64.0-20211012.1634057996496 sas-orchestration

mkdir /home/user
mkdir /home/user/kubernetes
cp $HOME/.kube/config /home/user/kubernetes/config
chmod 777 /home/user/kubernetes/config

kubectl get psp restricted -o custom-columns=NAME:.metadata.name,"SECCOMP":".metadata.annotations.seccomp\.security\.alpha\.kubernetes\.io/allowedProfileNames"


cd $SCIEZKA

# Install 
docker run --rm \
  -v $(pwd):/tmp/files \
  sas-orchestration \
  create sas-deployment-cr \
  --deployment-data /tmp/files/$PLIKCERTS \
  --license /tmp/files/$PLIKLICENCJA \
  --user-content /tmp/files/deploy \
  --cadence-name $CADENCE \
  --cadence-version $CADENCEVERSION \
 > viya4-sasdeployment.yaml

kubectl apply -f viya4-sasdeployment.yaml -n sasoperator

kubectl -n sasoperator get sasdeployment
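# Optional: monitor progress until the deployment reports SUCCEEDED (this takes a
# while; compare with step 8 of the overview above).
#   watch kubectl -n sasoperator get sasdeployment
#   kubectl -n sasoperator get pods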

 

 

4 REPLIES
gwootton
SAS Super FREQ
Very cool! A couple things I noticed:
We currently require kustomize version 3.7.0, so your "snap install kustomize" might need to instead pull and extract https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.7.0/kustomize_v3.7.0_l... into your PATH (it doesn't look like kustomize 3.7.0 is listed as a snap release).
You are creating a cert-manager issuer "sas-viya-issuer" in your sandbox namespace. The assets in sas-bases/overlays/cert-manager-issuer also create an issuer by that name (tied to the sas-viya-self-signing-issuer), so this may be confusing.
--
Greg Wootton | Principal Systems Technical Support Engineer
N224
Obsidian | Level 7
I will look into the documentation and verify the scripts; if some steps need to be changed, I'll update them 😉
N224
Obsidian | Level 7

Hey @gwootton, I've reviewed the configuration and:

1. The line with kubectl apply -f CAAuthority.yaml can be disabled to prevent creating the sandbox namespace with the custom CA and sas-viya-issuer.

2. I haven't found any official mention of a required kustomize version. Where did you find that? I looked through the 2021.1.6 deployment guide and found nothing.

 

In a few days I will open a project on GitLab with the scripts 🙂

 

gwootton
SAS Super FREQ
Here's the documentation link for 2021.1.6 showing a kustomize version requirement of 3.7.0:

SAS Viya Operations - Virtual Infrastructure Requirements - Kubernetes Client Machine Requirements
https://go.documentation.sas.com/doc/en/itopscdc/v_019/itopssr/n1ika6zxghgsoqn1mq4bck9dx695.htm#n0u8...
--
Greg Wootton | Principal Systems Technical Support Engineer
