<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: [VIYA4] Deploy Viya 4 Anywhere - Tutorial, Script, Locally or in cloud :) in Administration and Deployment</title>
    <link>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779138#M23391</link>
    <description>I will look into the documentation and verify the scripts; if some steps need to be changed, I'll update them &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;</description>
    <pubDate>Mon, 08 Nov 2021 14:58:26 GMT</pubDate>
    <dc:creator>N224</dc:creator>
    <dc:date>2021-11-08T14:58:26Z</dc:date>
    <item>
      <title>[VIYA4] Deploy Viya 4 Anywhere - Tutorial, Script, Locally or in cloud :)</title>
      <link>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779037#M23385</link>
      <description>&lt;P&gt;This article is designed to share an idea of testing/working with Viya4 on custom made environments, locally, in cloud, or wherever.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First of all, prepare 4 VMs running Ubuntu 20.04 LTS, with 12 vCPU and 24 GB RAM each. Here's my example:&lt;/P&gt;
&lt;PRE&gt;# 1. VMKUB01 Master node : Ubuntu 20.04 LTS, 500 GB Storage Space, 24 GB RAM, 12 vCPU
# 2. VMKUB02 Worker node 1 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 3. VMKUB03 Worker node 2 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 4. VMKUB04 Worker node 3 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You cannot deploy Viya 4 on a single machine: by default Kubernetes won't run more than 110 pods on one node :). You can run the whole script at once, or go step by step to better understand what's going on.&lt;/P&gt;
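&lt;P&gt;(If you want to confirm the per-node pod capacity once the cluster is up - the kubelet default is 110 - a quick check:)&lt;/P&gt;
&lt;PRE&gt;kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods&lt;/PRE&gt;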
&lt;P&gt;Remember that 80% of the work is preparing the cluster; the remaining 20% is Viya files configuration and deployment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The scripts and the overall approach are organized into the following steps:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Prepare VMs.&lt;/LI&gt;
&lt;LI&gt;Prepare files from portal.&lt;/LI&gt;
&lt;LI&gt;1st Script on master node:
&lt;UL&gt;
&lt;LI&gt;# Step 1, preinstall&lt;/LI&gt;
&lt;LI&gt;# Step 2, Kubernetes install&lt;/LI&gt;
&lt;LI&gt;# Step 3, turn off swap&lt;/LI&gt;
&lt;LI&gt;# Step 4, OS Tuning&lt;/LI&gt;
&lt;LI&gt;# Step 5, Installing more packages&lt;/LI&gt;
&lt;LI&gt;# Step 6, Docker install&lt;/LI&gt;
&lt;LI&gt;# Step 7, create kubernetes cluster&lt;/LI&gt;
&lt;LI&gt;# Step 8, Install kubernetes networking - calico and metallb&lt;/LI&gt;
&lt;LI&gt;# Step 9, Install helm&lt;/LI&gt;
&lt;LI&gt;# Step 10, Install ingress&lt;/LI&gt;
&lt;LI&gt;# Step 11, Install and configure NFS Server&lt;/LI&gt;
&lt;LI&gt;# Step 12, Install NFS Subdir External Provisioner&lt;/LI&gt;
&lt;LI&gt;# Step 13, Creating pod security policies&lt;/LI&gt;
&lt;LI&gt;# Step 14, check if NFS works&lt;/LI&gt;
&lt;LI&gt;# Step 15, SCP files to workers&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Script on each worker node:
&lt;UL&gt;
&lt;LI&gt;# Step 1, preinstall&lt;/LI&gt;
&lt;LI&gt;# Step 2, Kubernetes install&lt;/LI&gt;
&lt;LI&gt;# Step 3, turn off swap&lt;/LI&gt;
&lt;LI&gt;# Step 4, OS Tuning&lt;/LI&gt;
&lt;LI&gt;# Step 5, Installing more packages&lt;/LI&gt;
&lt;LI&gt;# Step 6, Docker install&lt;/LI&gt;
&lt;LI&gt;# Step 7, Joining kubernetes cluster&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Now you should be able to list the working nodes: &lt;BR /&gt;
&lt;PRE&gt;sudo su -l root
root@vmkub01:~# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
vmkub01   Ready    control-plane,master   2d15h   v1.21.5
vmkub02   Ready    &amp;lt;none&amp;gt;               2d15h   v1.21.5
vmkub03   Ready    &amp;lt;none&amp;gt;               2d15h   v1.21.5
vmkub04   Ready    &amp;lt;none&amp;gt;               2d15h   v1.21.5&lt;/PRE&gt;
&lt;/LI&gt;
&lt;LI&gt;Install Portainer (it's quite helpful; see the install sketch after this list)&lt;BR /&gt;
&lt;PRE&gt;root@vmkub01:~# kubectl get all -n portainer
NAME                             READY   STATUS    RESTARTS   AGE
pod/portainer-5d6dbf85dd-2mdtl   1/1     Running   1          2d15h

NAME                TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                        AGE
service/portainer   LoadBalancer   10.111.183.207   10.0.110.101   9000:31659/TCP,9443:31248/TCP,8000:30691/TCP   2d15h

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/portainer   1/1     1            1           2d15h

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/portainer-5d6dbf85dd   1         1         1       2d15h&lt;/PRE&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-family: inherit;"&gt;2nd Script on Master node:&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;# Step 1, Install Kustomize&lt;/LI&gt;
&lt;LI&gt;# Step 2, Install CertManager&lt;/LI&gt;
&lt;LI&gt;# Step 3, Create CA and Issuer with CertManager for full-TLS deployment&lt;/LI&gt;
&lt;LI&gt;# Step 4, deploying operator&lt;/LI&gt;
&lt;LI&gt;# Step 5, building directory structure&lt;/LI&gt;
&lt;LI&gt;# Step 6, Installation
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="font-family: inherit;"&gt;# Namespace&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-family: inherit;"&gt;# Ingress&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;# TLS&lt;/LI&gt;
&lt;LI&gt;# StorageClass&lt;/LI&gt;
&lt;LI&gt;# License&lt;/LI&gt;
&lt;LI&gt;# SAS Orchestration&lt;/LI&gt;
&lt;LI&gt;# Install&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-family: inherit;"&gt;After some time, check the status of the SAS deployment in the sasoperator namespace with&lt;/SPAN&gt;
&lt;PRE&gt;kubectl -n sasoperator get sasdeployment

root@vmkub01:~# kubectl -n sasoperator get sasdeployment
NAME       STATE       CADENCENAME   CADENCEVERSION   CADENCERELEASE           AGE
sas-viya   SUCCEEDED   stable        2021.1.6         20211104.1636065570555   2d11h&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/OL&gt;
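&lt;P&gt;For the Portainer step above, a minimal install sketch (assuming the official Portainer Helm chart - adjust values to taste):&lt;/P&gt;
&lt;PRE&gt;helm repo add portainer https://portainer.github.io/k8s/
helm repo update
# LoadBalancer service so MetalLB assigns an external IP, as in the output above
helm install portainer portainer/portainer -n portainer --create-namespace --set service.type=LoadBalancer&lt;/PRE&gt;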
&lt;P&gt;I really enjoyed watching progress with portainer &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Prepare the deployment files from the SAS portal (mine are):&lt;/P&gt;
&lt;PRE&gt;License file : SASViyaV4_9XXXXX_0_stable_2021.1.6_license_2021-10-21T071701.jwt
Certs file : SASViyaV4_9XXXXX_certs.zip
TGZ file : SASViyaV4_9XXXXX_0_stable_2021.1.6_20211020.1634741786565_deploymentAssets_2021-10-21T071710.tgz&lt;/PRE&gt;
&lt;P&gt;At installation time, use the same account (it helps a lot) with the same password on each machine. My account is called gabos.&lt;/P&gt;
&lt;P&gt;After a successful OS installation, log in to the master node via SSH with your account.&lt;/P&gt;
&lt;P&gt;Copy your deployment files via scp (or any other file manager) to /home/gabos/ (your own account's home, of course).&lt;/P&gt;
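&lt;P&gt;For example, from the machine where you downloaded the files (the file names and IP below are just my values):&lt;/P&gt;
&lt;PRE&gt;scp SASViyaV4_9XXXXX_0_stable_2021.1.6_license_2021-10-21T071701.jwt \
    SASViyaV4_9XXXXX_certs.zip \
    SASViyaV4_9XXXXX_0_stable_2021.1.6_20211020.1634741786565_deploymentAssets_2021-10-21T071710.tgz \
    gabos@10.0.110.95:/home/gabos/&lt;/PRE&gt;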
&lt;P&gt;Create a new file (vi script.sh), paste in the whole script below, adjust the variables to your needs, make it executable (chmod +x script.sh), and run it.&lt;/P&gt;
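&lt;P&gt;In other words (logging the run with tee is optional but handy):&lt;/P&gt;
&lt;PRE&gt;cd /home/gabos
vi script.sh                        # paste the master-node script below and adjust the variables
chmod +x script.sh
./script.sh 2&amp;gt;&amp;amp;1 | tee script.log&lt;/PRE&gt;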
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;1st script, on the master node:&lt;/P&gt;
&lt;PRE&gt;#!/bin/bash

# This script is designed and released by Gabos Software, feel free to use it and make it better


## Configuration section: these variables are used to prepare the whole environment.
# It is hardcoded for an environment like:
# 1. VMKUB01 Master node : Ubuntu 20.04 LTS, 500 GB Storage Space, 24 GB RAM, 12 vCPU
# 2. VMKUB02 Worker node 1 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 3. VMKUB03 Worker node 2 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 4. VMKUB04 Worker node 3 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# I prepared all OSes with user account called "gabos"

# HOST
export NAZWADNS="vmkub01.local" # This is the DNS name you will use to reach the environment; you will access SAS Viya only by this name, e.g. https://vmkub01.local. Remember that this name must resolve to the load balancer IP address on the machine you connect to Viya from
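# (For example, on the client machine you browse from you can map this name to the ingress LoadBalancer IP,
#  e.g. add "10.0.110.100 vmkub01.local" to /etc/hosts - the IP shown here is just illustrative.)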
export KUBEJOIN="/home/gabos/KUBEJOIN.log" # Path where the file with the kubernetes join command/token will be written

# NFS :
export STORAGEFOLDER="/home/saspodstorage" # This folder will be created to store persistent volumes for the NFS server
export NFSRULES="*(rw,sync,no_subtree_check,crossmnt,fsid=0)" # you shouldn't touch this
export NFSNETWORK="10.0.110.0/24" # it's no longer used; NFS will be accessible "world"-style
export NFSSERVER="10.0.110.95" # This is your VMKUB01 (master node) / NFS server IP

# NODES :
export WORKER1="10.0.110.96" # first worker node
export WORKER1DNS="vmkub02.local" # it won't be used, but fill it in just in case
export WORKER2="10.0.110.97" # second worker node
export WORKER2DNS="vmkub03.local" # it won't be used, but fill it in just in case
export WORKER3="10.0.110.98" # third worker node
export WORKER3DNS="vmkub04.local" # it won't be used, but fill it in just in case
export KUBEJOINREMOTE1="gabos@$WORKER1:/home/gabos/KUBEJOIN.log" # path for the kubejoin.log file on worker1; the file will be sent via scp from master to worker
export KUBEJOINREMOTE2="gabos@$WORKER2:/home/gabos/KUBEJOIN.log" # path for the kubejoin.log file on worker2; the file will be sent via scp from master to worker
export KUBEJOINREMOTE3="gabos@$WORKER3:/home/gabos/KUBEJOIN.log" # path for the kubejoin.log file on worker3; the file will be sent via scp from master to worker

# KUBERNETES :
export NETWORKCIDR="192.168.0.0/16" # this is the internal kubernetes pod network, used by the kubeadm init command
export NETWORKADDR="10.0.110.95" # should be the same as NFSSERVER; it's the master node IP address
export METALLBADDR="10.0.110.100-10.0.110.120" # provide a range of addresses for load balancers; one of these IPs will be used by ingress-nginx to route your traffic
export SASNAMESPACE="sasoperator" # this is namespace for sasoperator and sas deployment, you shouldn't change it
# -------------------------------------------------------------------------------------------------------------------------------------

# !!!!!!!!!!! DO NOT TOUCH THE VARIABLES BELOW !!!!!!!!!!!!
export PREINST1="vim git curl wget pip apt-transport-https nfs-common"
export PREINST2="gnupg2 software-properties-common ca-certificates"
export DOCKERINST="containerd.io docker-ce docker-ce-cli"
export KUBEVERSION="1.21.5-00"
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

# Step 1, preinstall
echo -e "Preinstalling \v packages\v"
sudo apt update
sudo apt -y install $PREINST1
clear
# --------------------------------------------------------

# Step 2, Kubernetes install
echo -e "Installing kubernetes\v"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt-get install -y kubelet=$KUBEVERSION kubectl=$KUBEVERSION kubeadm=$KUBEVERSION
sudo apt-mark hold kubelet kubeadm kubectl
clear
# --------------------------------------------------------

# Step 3, turn off swap
echo -e "Turning\v off\v SWAPu\v"
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
clear
# --------------------------------------------------------

# Step 4, OS Tuning
echo -e "Tuning\v OS\v"
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf&amp;lt;&amp;lt;EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
clear
# --------------------------------------------------------


# Step 5, Installing more packages
echo -e "Installing \v packages 2 \v"
sudo apt install -y $PREINST2
clear
# --------------------------------------------------------


# Step 6, Docker install
echo -e "Installing \v Docker\v"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y $DOCKERINST
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/docker/daemon.json &amp;lt;&amp;lt;EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
clear
# --------------------------------------------------------

# Step 7, create kubernetes cluster
echo -e "Rozpoczynam\v konfigurajce\v Kubernetes\v"
sudo systemctl enable kubelet
sudo kubeadm config images pull
sudo kubeadm init --pod-network-cidr=$NETWORKCIDR --apiserver-advertise-address=$NETWORKADDR &amp;gt;&amp;gt; $KUBEJOIN
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=$HOME/.kube/config" | tee -a ~/.bashrc
clear
# --------------------------------------------------------


# Step 8, Install kubernetes networking - calico and metallb
echo -e "Kubernets \v configuration\v"
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.3/manifests/metallb.yaml
sudo tee metallbcm.yml &amp;lt;&amp;lt; EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $METALLBADDR
EOF
kubectl create -f metallbcm.yml
clear
# --------------------------------------------------------

# Step 9, Install helm
echo -e "Installing\v HELM\v"
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
clear
# --------------------------------------------------------

# Step 10, Install ingress
echo -e "Ingress\v Nginx 0.43 \v"
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/baremetal/deploy.yaml
sed -i 's/NodePort/LoadBalancer/g' deploy.yaml
kubectl create ns ingress-nginx
kubectl apply -f deploy.yaml
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=180s
clear
# --------------------------------------------------------

# Step 11, Install and configure NFS Server 

echo -e "NFS \v"
sudo apt -y install nfs-kernel-server
sudo cat /proc/fs/nfsd/versions
sudo mkdir -p /srv/nfs4/nfs-share
sudo mkdir -p $STORAGEFOLDER
sudo mount --bind $STORAGEFOLDER /srv/nfs4/nfs-share
sudo echo "$STORAGEFOLDER /srv/nfs4/nfs-share  none   bind   0   0" &amp;gt;&amp;gt; /etc/fstab
sudo mount -a
sudo ufw allow from $NFSNETWORK to any port nfs
sudo echo "/srv/nfs4/nfs-share         $NFSRULES" &amp;gt;&amp;gt; /etc/exports
sudo chmod 777 -R $STORAGEFOLDER
sudo exportfs -ar
sudo exportfs -v
sudo systemctl restart nfs-server
sleep 1m
clear
# --------------------------------------------------------

# Step 12, Install NFS Subdir External Provisioner
kubectl create ns $SASNAMESPACE
echo -e "NFS \v Subdir \v External \v Provisioner\v"
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=$NFSSERVER \
    --set nfs.path=/srv/nfs4/nfs-share \
    --set storageClass.defaultClass=true \
    --set storageClass.accessModes=ReadWriteMany \
    --namespace $SASNAMESPACE

# --------------------------------------------------------

# Step 13, Creating pod security policies
echo -e "PSP\v"

tee psp-privileged.yaml &amp;lt;&amp;lt; EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
EOF


tee psp-baseline.yaml &amp;lt;&amp;lt; EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: baseline
  annotations:
    # Optional: Allow the default AppArmor profile, requires setting the default.
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: false
  # The moby default capability set, minus NET_RAW
  allowedCapabilities:
    - 'CHOWN'
    - 'DAC_OVERRIDE'
    - 'FSETID'
    - 'FOWNER'
    - 'MKNOD'
    - 'SETGID'
    - 'SETUID'
    - 'SETFCAP'
    - 'SETPCAP'
    - 'NET_BIND_SERVICE'
    - 'SYS_CHROOT'
    - 'KILL'
    - 'AUDIT_WRITE'
  # Allow all volume types except hostpath
  volumes:
    # 'core' volume types
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that ephemeral CSI drivers &amp;amp; persistentVolumes set up by the cluster admin are safe to use.
    - 'csi'
    - 'persistentVolumeClaim'
    - 'ephemeral'
    # Allow all other non-hostpath volume types.
    - 'awsElasticBlockStore'
    - 'azureDisk'
    - 'azureFile'
    - 'cephFS'
    - 'cinder'
    - 'fc'
    - 'flexVolume'
    - 'flocker'
    - 'gcePersistentDisk'
    - 'gitRepo'
    - 'glusterfs'
    - 'iscsi'
    - 'nfs'
    - 'photonPersistentDisk'
    - 'portworxVolume'
    - 'quobyte'
    - 'rbd'
    - 'scaleIO'
    - 'storageos'
    - 'vsphereVolume'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  readOnlyRootFilesystem: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    # The PSP SELinux API cannot express the SELinux Pod Security Standards,
    # so if using SELinux, you must choose a more restrictive default.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
EOF

tee psp-restricted.yaml &amp;lt;&amp;lt; EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  # Allow core volume types.
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that ephemeral CSI drivers &amp;amp; persistentVolumes set up by the cluster admin are safe to use.
    - 'csi'
    - 'persistentVolumeClaim'
    - 'ephemeral'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
EOF

kubectl apply -f psp-restricted.yaml
kubectl apply -f psp-baseline.yaml
kubectl apply -f psp-privileged.yaml

kubectl get psp restricted -o custom-columns=NAME:.metadata.name,"SECCOMP":".metadata.annotations.seccomp\.security\.alpha\.kubernetes\.io/allowedProfileNames"
clear
# --------------------------------------------------------

# Step 14, check if NFS works
sudo tee test-pod.yaml &amp;lt;&amp;lt; EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS &amp;amp;&amp;amp; exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF

kubectl apply -f test-pod.yaml -n sasoperator
sleep 1m
kubectl describe pod test-pod -n sasoperator
clear
# --------------------------------------------------------

# Step 15, SCP files to workers

scp $KUBEJOIN $KUBEJOINREMOTE1
scp $KUBEJOIN $KUBEJOINREMOTE2
scp $KUBEJOIN $KUBEJOINREMOTE3

# --------------------------------------------------------&lt;/PRE&gt;
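&lt;P&gt;Before moving on to the workers, a few optional sanity checks (a sketch, assuming the default values above):&lt;/P&gt;
&lt;PRE&gt;# ingress-nginx should have an EXTERNAL-IP from the METALLBADDR range
kubectl -n ingress-nginx get svc ingress-nginx-controller
# the NFS test claim should be Bound and the test pod should have completed
kubectl -n sasoperator get pvc test-claim
kubectl -n sasoperator get pod test-pod
# the provisioned subdirectory under STORAGEFOLDER should contain a SUCCESS file
ls -R /home/saspodstorage&lt;/PRE&gt;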
&lt;P&gt;1st script, on each worker:&lt;/P&gt;
&lt;PRE&gt;#!/bin/bash

# This script is designed and released by Gabos Software, feel free to use it and make it better


## Configuration section: these variables are used to prepare the whole environment.
# It is hardcoded for an environment like:
# 1. VMKUB01 Master node : Ubuntu 20.04 LTS, 500 GB Storage Space, 24 GB RAM, 12 vCPU
# 2. VMKUB02 Worker node 1 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 3. VMKUB03 Worker node 2 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 4. VMKUB04 Worker node 3 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# I prepared all OSes with user account called "gabos"

# HOST
export KUBEJOIN="/home/gabos/KUBEJOIN.log" # local path to kubejoin file (scped before)

# !!!!!!!!!!! DO NOT TOUCH THE VARIABLES BELOW !!!!!!!!!!!!
export PREINST1="vim git curl wget pip apt-transport-https nfs-common"
export PREINST2="gnupg2 software-properties-common ca-certificates"
export DOCKERINST="containerd.io docker-ce docker-ce-cli"
export KUBEVERSION="1.21.5-00"
# -----------------------------------------------------------------

# Step 1, preinstall
echo -e "Preinstall\v 1\v"
sudo apt update
sudo apt -y install $PREINST1
clear


# Step 2, Kubernetes install
echo -e "Installing kubernetes\v"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt-get install -y kubelet=$KUBEVERSION kubectl=$KUBEVERSION kubeadm=$KUBEVERSION
sudo apt-mark hold kubelet kubeadm kubectl
clear
# --------------------------------------------------------


# Step 3, turn off swap
echo -e "Turning\v off\v SWAPu\v"
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
clear
# --------------------------------------------------------


# Step 4, OS Tuning
echo -e "Tuning\v OS\v"
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf&amp;lt;&amp;lt;EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
clear
# --------------------------------------------------------


# Step 5, Installing more packages
echo -e "Installing \v packages 2 \v"
sudo apt install -y $PREINST2
clear
# --------------------------------------------------------


# Step 6, Docker install
echo -e "Installing \v Docker\v"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y $DOCKERINST
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/docker/daemon.json &amp;lt;&amp;lt;EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
clear
# --------------------------------------------------------

# Step 7, joining kubernetes cluster
echo -e "Joining \v kuberenets cluster\v"
sudo systemctl enable kubelet
cat $KUBEJOIN | grep 'kubeadm join' -A 1
export JOINCMD=$(cat $KUBEJOIN | grep 'kubeadm join' -A 1)
export JOINCMD=$(echo $JOINCMD | sed  's/\\//g')
sudo $JOINCMD

# --------------------------------------------------------&lt;/PRE&gt;
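&lt;P&gt;Once the worker script has finished on all three machines, you can verify the joins from the master (optional check):&lt;/P&gt;
&lt;PRE&gt;kubectl get nodes -o wide
# one calico-node pod should be running per node
kubectl -n kube-system get pods -o wide | grep calico&lt;/PRE&gt;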
&lt;P&gt;2nd script, on the master node:&lt;/P&gt;
&lt;PRE&gt;#!/bin/bash

# This script is designed and released by Gabos Software, feel free to use it and make it better


## Configuration section: these variables are used to prepare the whole environment.
# It is hardcoded for an environment like:
# 1. VMKUB01 Master node : Ubuntu 20.04 LTS, 500 GB Storage Space, 24 GB RAM, 12 vCPU
# 2. VMKUB02 Worker node 1 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 3. VMKUB03 Worker node 2 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# 4. VMKUB04 Worker node 3 : Ubuntu 20.04 LTS, 200 GB Storage Space, 24 GB RAM, 12 vCPU
# I prepared all OSes with user account called "gabos"

export NAZWADNS="vmkub01.local" # This is the DNS name you will use to reach the environment; you will access SAS Viya only by this name, e.g. https://vmkub01.local. Remember that this name must resolve to the load balancer IP address on the machine you connect to Viya from


# Config
export SCIEZKA="/home/gabos" # working folder where all the files are located
export PLIKDEPLOY="SASViyaV4_9XXXX_0_stable_2021.1.6_20211020.1634741786565_deploymentAssets_2021-10-21T071710.tgz" # tgz deploy file
export PLIKLICENCJA="SASViyaV4_9XXXX_0_stable_2021.1.6_license_2021-10-21T071701.jwt" # license file
export PLIKCERTS="SASViyaV4_9XXXX_certs.zip" # certs file
export SASNAMESPACE="sasoperator" # this is namespace for sasoperator and sas deployment, you shouldn't change it
export CADENCE="stable" # installation cadence; it can be lts or stable - read about it in the documentation or just copy it from the deployment file name
export CADENCEVERSION="2021.1.6" # full cadence version; read about it in the documentation or just copy it from the deployment file name
# all of the files should be in the path defined by SCIEZKA

# Step 1, Install Kustomize
snap install kustomize

# Step 2, Install CertManager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.5.4 \
  --set installCRDs=true

# Step 3, Create CA and Issuer with CertManager for full-TLS deployment
cd $SCIEZKA
kubectl create namespace sandbox

tee CAAuthority.yaml &amp;lt;&amp;lt; EOF
apiVersion: v1
kind: Namespace
metadata:
  name: sandbox
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-selfsigned-ca
  namespace: sandbox
spec:
  isCA: true
  commonName: my-selfsigned-ca
  secretName: root-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: sas-viya-issuer
  namespace: sandbox
spec:
  ca:
    secretName: root-secret
EOF

kubectl apply -f CAAuthority.yaml -n sandbox

# Step 4, deploying operator
cd $SCIEZKA
mkdir operator-deploy
cp $PLIKDEPLOY operator-deploy
cd operator-deploy
tar xvfz $PLIKDEPLOY
cp -r sas-bases/examples/deployment-operator/deploy/* .
chmod +w site-config/transformer.yaml
# Change the values for namespace and clusterrolebinding
sed -i 's/{{ NAME-OF-CLUSTERROLEBINDING }}/sasoperator/g' site-config/transformer.yaml
sed -i 's/{{ NAME-OF-NAMESPACE }}/sasoperator/g' site-config/transformer.yaml
kustomize build . | kubectl -n sasoperator apply -f -
kubectl get all -n sasoperator
# ------------------

# Step 5, building directory structure
cd $SCIEZKA
mkdir deploy
cp $PLIKDEPLOY deploy
cd deploy
tar xvfz $PLIKDEPLOY
rm -rf $PLIKDEPLOY
mkdir site-config
# -------------------------


# Step 6, Installation
cd $SCIEZKA
cd deploy
tee kustomization.yaml &amp;lt;&amp;lt; EOF
namespace: {{ NAME-OF-NAMESPACE }} 
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer 
- sas-bases/overlays/network/networking.k8s.io 
- sas-bases/overlays/cas-server
- sas-bases/overlays/internal-postgres
# If your deployment contains programming-only offerings only, comment out the next line
- sas-bases/overlays/internal-elasticsearch
- sas-bases/overlays/update-checker
- sas-bases/overlays/cas-server/auto-resources 
configurations:
- sas-bases/overlays/required/kustomizeconfig.yaml
transformers:
# If your deployment does not support privileged containers or if your deployment
# contains programming-only offerings, comment out the next line 
- sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml
- sas-bases/overlays/required/transformers.yaml
- site-config/security/cert-manager-provided-ingress-certificate.yaml 
- sas-bases/overlays/cas-server/auto-resources/remove-resources.yaml 
# If your deployment contains programming-only offerings only, comment out the next line
- sas-bases/overlays/internal-elasticsearch/internal-elasticsearch-transformer.yaml
# Mount information
# - site-config/{{ DIRECTORY-PATH }}/cas-add-host-mount.yaml
components:
- sas-bases/components/security/core/base/full-stack-tls 
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls 
patches:
- path: site-config/storageclass.yaml 
  target:
    kind: PersistentVolumeClaim
    annotationSelector: sas.com/component-name in (sas-backup-job,sas-data-quality-services,sas-commonfiles,sas-cas-operator,sas-pyconfig)
# License information
# secretGenerator:
# - name: sas-license
#   type: sas.com/license
#   behavior: merge
#   files:
#   - SAS_LICENSE=license.jwt
configMapGenerator:
- name: ingress-input
  behavior: merge
  literals:
  - INGRESS_HOST={{ NAME-OF-INGRESS-HOST }}
- name: sas-shared-config
  behavior: merge
  literals:
  - SAS_SERVICES_URL=https://{{ NAME-OF-INGRESS-HOST }}:{{ PORT }} 
  # - SAS_URL_EXTERNAL_VIYA={{ EXTERNAL-PROXY-URL }}
EOF

# Change {{ NAME-OF-NAMESPACE }}
sed -i 's/{{ NAME-OF-NAMESPACE }}/sasoperator/g' kustomization.yaml
# Del auto-resources
sed -i '/auto-resources/d' kustomization.yaml 

# Ingress
export INGRESS_HOST=$(kubectl -n ingress-nginx get service ingress-nginx-controller  -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
sed -i "s/{{ NAME-OF-INGRESS-HOST }}/$NAZWADNS/g" kustomization.yaml
export INGRESS_HTTPS_PORT=$(kubectl -n ingress-nginx get service ingress-nginx-controller  -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
sed -i "s/{{ PORT }}/$INGRESS_HTTPS_PORT/g" kustomization.yaml


# TLS
cd $SCIEZKA
cd deploy
mkdir site-config/security
cp sas-bases/examples/security/cert-manager-provided-ingress-certificate.yaml site-config/security/cert-manager-provided-ingress-certificate.yaml
sed -i "s/{{ CERT-MANAGER-ISSUER-NAME }}/sas-viya-issuer/g" site-config/security/cert-manager-provided-ingress-certificate.yaml
sed -i "s/{{ CERT-MANAGER_ISSUER_NAME }}/sas-viya-issuer/g" site-config/security/cert-manager-provided-ingress-certificate.yaml 

# StorageClass
cd $SCIEZKA
cd deploy
export STORAGECLASS=$(kubectl get storageclass -o jsonpath='{.items[*].metadata.name}')
tee site-config/storageclass.yaml &amp;lt;&amp;lt; EOF
kind: RWXStorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
 name: wildcard
spec:
 storageClassName: $STORAGECLASS
EOF


# License
cd $SCIEZKA
mkdir license
cp $PLIKLICENCJA license

# SAS Orchestration
docker pull cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.64.0-20211012.1634057996496
docker tag cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.64.0-20211012.1634057996496 sas-orchestration

mkdir /home/user
mkdir /home/user/kubernetes
cp $HOME/.kube/config /home/user/kubernetes/config
chmod 777 /home/user/kubernetes/config

kubectl get psp restricted -o custom-columns=NAME:.metadata.name,"SECCOMP":".metadata.annotations.seccomp\.security\.alpha\.kubernetes\.io/allowedProfileNames"


cd $SCIEZKA

# Install 
docker run --rm \
  -v $(pwd):/tmp/files \
  sas-orchestration \
  create sas-deployment-cr \
  --deployment-data /tmp/files/$PLIKCERTS \
  --license /tmp/files/$PLIKLICENCJA \
  --user-content /tmp/files/deploy \
  --cadence-name $CADENCE \
  --cadence-version $CADENCEVERSION \
 &amp;gt; viya4-sasdeployment.yaml

kubectl apply -f viya4-sasdeployment.yaml -n sasoperator

kubectl -n sasoperator get sasdeployment
&lt;/PRE&gt;
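&lt;P&gt;The last command can take quite a while to reach SUCCEEDED; a simple way to watch progress (optional sketch):&lt;/P&gt;
&lt;PRE&gt;watch -n 30 kubectl -n sasoperator get sasdeployment
# list pods that are not yet Running or Completed
kubectl -n sasoperator get pods | grep -vE 'Running|Completed'&lt;/PRE&gt;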
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 07 Nov 2021 23:32:28 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779037#M23385</guid>
      <dc:creator>N224</dc:creator>
      <dc:date>2021-11-07T23:32:28Z</dc:date>
    </item>
    <item>
      <title>Re: [VIYA4] Deploy Viya 4 Anywhere - Tutorial, Script, Locally or in cloud :)</title>
      <link>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779125#M23389</link>
      <description>Very cool! A couple of things I noticed:&lt;BR /&gt;We currently require kustomize version 3.7.0, so your "snap install kustomize" might need to instead pull and extract &lt;A href="https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.7.0/kustomize_v3.7.0_linux_amd64.tar.gz" target="_blank"&gt;https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.7.0/kustomize_v3.7.0_linux_amd64.tar.gz&lt;/A&gt; into your PATH (it doesn't look like kustomize 3.7.0 is listed as a snap release).&lt;BR /&gt;You are creating a cert-manager issuer "sas-viya-issuer" in your sandbox namespace. The assets in sas-bases/overlays/cert-manager-issuer also create an issuer by that name (tied to the sas-viya-self-signing-issuer), so this may be confusing.</description>
      <pubDate>Mon, 08 Nov 2021 13:57:05 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779125#M23389</guid>
      <dc:creator>gwootton</dc:creator>
      <dc:date>2021-11-08T13:57:05Z</dc:date>
    </item>
    <item>
      <title>Re: [VIYA4] Deploy Viya 4 Anywhere - Tutorial, Script, Locally or in cloud :)</title>
      <link>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779138#M23391</link>
      <description>I will look into the documentation and verify the scripts; if some steps need to be changed, I'll update them &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;</description>
      <pubDate>Mon, 08 Nov 2021 14:58:26 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779138#M23391</guid>
      <dc:creator>N224</dc:creator>
      <dc:date>2021-11-08T14:58:26Z</dc:date>
    </item>
    <item>
      <title>Re: [VIYA4] Deploy Viya 4 Anywhere - Tutorial, Script, Locally or in cloud :)</title>
      <link>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779734#M23393</link>
      <description>&lt;P&gt;Hey &lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/78975"&gt;@gwootton&lt;/a&gt;&amp;nbsp;, I've reviewed the configuration and:&lt;/P&gt;
&lt;P&gt;1. The line with kubectl apply -f CAAuthority.yaml can be disabled to prevent creating the sandbox namespace with the custom CA and sas-viya-issuer.&lt;/P&gt;
&lt;P&gt;2. I haven't found any official notes about the required kustomize version - where did you find that? I looked through the 2021.1.6 deployment guide and found none.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In a few days I will open a project on GitLab with the scripts &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 11 Nov 2021 08:33:17 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779734#M23393</guid>
      <dc:creator>N224</dc:creator>
      <dc:date>2021-11-11T08:33:17Z</dc:date>
    </item>
    <item>
      <title>Re: [VIYA4] Deploy Viya 4 Anywhere - Tutorial, Script, Locally or in cloud :)</title>
      <link>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779791#M23395</link>
      <description>Here's the documentation link for 2021.1.6 showing a kustomize version requirement of 3.7.0:&lt;BR /&gt;&lt;BR /&gt;SAS Viya Operations - Virtual Infrastructure Requirements - Kubernetes Client Machine Requirements&lt;BR /&gt;&lt;A href="https://go.documentation.sas.com/doc/en/itopscdc/v_019/itopssr/n1ika6zxghgsoqn1mq4bck9dx695.htm#n0u8dut20wmtp3n1jukmbg0dmim5" target="_blank"&gt;https://go.documentation.sas.com/doc/en/itopscdc/v_019/itopssr/n1ika6zxghgsoqn1mq4bck9dx695.htm#n0u8dut20wmtp3n1jukmbg0dmim5&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Nov 2021 13:33:18 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Administration-and-Deployment/VIYA4-Deploy-Viya-4-Anywhere-Tutorial-Script-Locally-or-in-cloud/m-p/779791#M23395</guid>
      <dc:creator>gwootton</dc:creator>
      <dc:date>2021-11-11T13:33:18Z</dc:date>
    </item>
  </channel>
</rss>

