🔒 This topic is solved and locked. Need further help from the community? Please sign in and ask a new question.
N224
Obsidian | Level 7

Hello mates, I'm developing a script that deploys a whole Kubernetes environment + SAS Viya 4 locally on one machine. I've put a lot of effort into making it work as desired. The script is finally close to ready and, like my other "hardcore tasks", it will of course be shared with you.

I know that there is limited support from SAS for custom-made clusters, so I'm asking you, the fellow community, for help.

 


The problem I'm facing is that 99% of my pods fail with "sas-consul-client secret not found". Of course I researched https://support.sas.com/kb/67/349.html and created the specified PSPs, but it still doesn't work.
The biggest mystery is that none of the YAMLs used for the deployment (site-config, sas-bases, sas-orchestration) actually create this secret; there are only secret references (secretKeyRef) 😐
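For the record, these are the quick checks I ran before posting (the namespace "sasoperator" is just the one I use for my deployment):

kubectl -n sasoperator get secret sas-consul-client      # the secret really is missing
kubectl -n sasoperator get pods -l app=sas-consul-server # are the consul servers even running?
kubectl -n sasoperator get pvc                           # are all the claims Bound?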

Ok, here's my environment:

1. Installed kubelet=$KUBEVERSION kubectl=$KUBEVERSION kubeadm=$KUBEVERSION where KUBEVERSION="1.21.5-00"
2. Turned swap off
3. Installed docker
4. Initialized the cluster with kubeadm init --pod-network-cidr=$NETWORKCIDR --apiserver-advertise-address=$NETWORKADDR where NETWORKCIDR="192.168.0.0/16" && NETWORKADDR="10.0.110.99"
5. Created PSPs (applied and authorized via RBAC - see the sketch after the three definitions below):

 

 

tee psp-privileged.yaml << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
EOF

tee psp-baseline.yaml << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: baseline
  annotations:
    # Optional: Allow the default AppArmor profile, requires setting the default.
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: false
  # The moby default capability set, minus NET_RAW
  allowedCapabilities:
    - 'CHOWN'
    - 'DAC_OVERRIDE'
    - 'FSETID'
    - 'FOWNER'
    - 'MKNOD'
    - 'SETGID'
    - 'SETUID'
    - 'SETFCAP'
    - 'SETPCAP'
    - 'NET_BIND_SERVICE'
    - 'SYS_CHROOT'
    - 'KILL'
    - 'AUDIT_WRITE'
  # Allow all volume types except hostpath
  volumes:
    # 'core' volume types
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use.
    - 'csi'
    - 'persistentVolumeClaim'
    - 'ephemeral'
    # Allow all other non-hostpath volume types.
    - 'awsElasticBlockStore'
    - 'azureDisk'
    - 'azureFile'
    - 'cephFS'
    - 'cinder'
    - 'fc'
    - 'flexVolume'
    - 'flocker'
    - 'gcePersistentDisk'
    - 'gitRepo'
    - 'glusterfs'
    - 'iscsi'
    - 'nfs'
    - 'photonPersistentDisk'
    - 'portworxVolume'
    - 'quobyte'
    - 'rbd'
    - 'scaleIO'
    - 'storageos'
    - 'vsphereVolume'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  readOnlyRootFilesystem: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    # The PSP SELinux API cannot express the SELinux Pod Security Standards,
    # so if using SELinux, you must choose a more restrictive default.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
EOF

tee psp-restricted.yaml << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  # Allow core volume types.
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use.
    - 'csi'
    - 'persistentVolumeClaim'
    - 'ephemeral'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
EOF
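To make the PSPs above do anything, they have to be applied and then something has to be authorized to "use" them (and they only matter at all if the PodSecurityPolicy admission plugin is enabled on the API server). This is only a rough sketch with names I made up; binding the privileged policy to all service accounts is only sane on a throwaway single-machine lab:

kubectl apply -f psp-privileged.yaml -f psp-baseline.yaml -f psp-restricted.yaml

tee psp-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged-user   # hypothetical name
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["privileged"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-privileged-all-serviceaccounts   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
EOF
kubectl apply -f psp-rbac.yaml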

6. Installed Calico, MetalLB, and the ingress controller
7. Created the NFS server and the NFS Subdir External Provisioner
8. Downloaded and created the ARK report (in attachment)
9. Installed cert-manager and a custom CA authority
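For step 9, this is roughly what the issuer part of my script looks like - a hedged sketch only: "my-ca.crt" / "my-ca.key" are placeholders for the custom CA files, and I create the Issuer in the deployment namespace so its name matches the "sas-viya-issuer" referenced later by the ingress certificate patch:

kubectl create ns sasoperator 2>/dev/null || true
kubectl -n sasoperator create secret tls sas-viya-ca --cert=my-ca.crt --key=my-ca.key

tee sas-viya-issuer.yaml << EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: sas-viya-issuer
  namespace: sasoperator
spec:
  ca:
    secretName: sas-viya-ca
EOF
kubectl apply -f sas-viya-issuer.yaml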

 

1. Created the directory structure for the sasoperator and the SAS deployment
2. Successfully deployed sasoperator
3. Created kustomization.yaml for the site deployment:

 

namespace: {{ NAME-OF-NAMESPACE }} 
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer 
- sas-bases/overlays/network/networking.k8s.io 
- sas-bases/overlays/cas-server
- sas-bases/overlays/internal-postgres
# If your deployment contains programming-only offerings only, comment out the next line
- sas-bases/overlays/internal-elasticsearch
- sas-bases/overlays/update-checker
- sas-bases/overlays/cas-server/auto-resources 
configurations:
- sas-bases/overlays/required/kustomizeconfig.yaml
transformers:
# If your deployment does not support privileged containers or if your deployment
# contains programming-only offerings, comment out the next line 
- sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml
- sas-bases/overlays/required/transformers.yaml
- site-config/security/cert-manager-provided-ingress-certificate.yaml 
- sas-bases/overlays/cas-server/auto-resources/remove-resources.yaml 
# If your deployment contains programming-only offerings only, comment out the next line
- sas-bases/overlays/internal-elasticsearch/internal-elasticsearch-transformer.yaml
# Mount information
# - site-config/{{ DIRECTORY-PATH }}/cas-add-host-mount.yaml
components:
- sas-bases/components/security/core/base/full-stack-tls 
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls 
patches:
- path: site-config/storageclass.yaml 
  target:
    kind: PersistentVolumeClaim
    annotationSelector: sas.com/component-name in (sas-backup-job,sas-data-quality-services,sas-commonfiles,sas-cas-operator,sas-pyconfig)
# License information
# secretGenerator:
# - name: sas-license
#   type: sas.com/license
#   behavior: merge
#   files:
#   - SAS_LICENSE=license.jwt
configMapGenerator:
- name: ingress-input
  behavior: merge
  literals:
  - INGRESS_HOST={{ NAME-OF-INGRESS-HOST }}
- name: sas-shared-config
  behavior: merge
  literals:
  - SAS_SERVICES_URL=https://{{ NAME-OF-INGRESS-HOST }}:{{ PORT }} 
  # - SAS_URL_EXTERNAL_VIYA={{ EXTERNAL-PROXY-URL }}

After changing the values, the final kustomization.yaml looks like this:

 

namespace: sasoperator
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/networking.k8s.io
- sas-bases/overlays/cas-server
- sas-bases/overlays/internal-postgres
# If your deployment contains programming-only offerings only, comment out the next line
- sas-bases/overlays/internal-elasticsearch
- sas-bases/overlays/update-checker
configurations:
- sas-bases/overlays/required/kustomizeconfig.yaml
transformers:
# If your deployment does not support privileged containers or if your deployment
# contains programming-only offerings, comment out the next line
- sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml
- sas-bases/overlays/required/transformers.yaml
- site-config/security/cert-manager-provided-ingress-certificate.yaml
# If your deployment contains programming-only offerings only, comment out the next line
- sas-bases/overlays/internal-elasticsearch/internal-elasticsearch-transformer.yaml
# Mount information
# - site-config/{{ DIRECTORY-PATH }}/cas-add-host-mount.yaml
components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls
patches:
- path: site-config/storageclass.yaml
  target:
    kind: PersistentVolumeClaim
    annotationSelector: sas.com/component-name in (sas-backup-job,sas-data-quality-services,sas-commonfiles,sas-cas-operator,sas-pyconfig)
# License information
# secretGenerator:
# - name: sas-license
#   type: sas.com/license
#   behavior: merge
#   files:
#   - SAS_LICENSE=license.jwt
configMapGenerator:
- name: ingress-input
  behavior: merge
  literals:
  - INGRESS_HOST=vmkub01.local
- name: sas-shared-config
  behavior: merge
  literals:
  - SAS_SERVICES_URL=https://vmkub01.local:443
  # - SAS_URL_EXTERNAL_VIYA={{ EXTERNAL-PROXY-URL }}

Pulled the sas-orchestration image:
docker pull cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.64.0-20211012.1634057996496
docker tag cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.64.0-20211012.1634057996496 sas-orchestration

Created the deployment .yaml with sas-orchestration:

 

docker run --rm \
  -v $(pwd):/tmp/files \
  sas-orchestration \
  create sas-deployment-cr \
  --deployment-data /tmp/files/$CERTS \
  --license /tmp/files/$LICENCE \
  --user-content /tmp/files/deploy \
  --cadence-name $CADENCE \
  --cadence-version $CADENCEVERSION \
 > viya4-sasdeployment.yaml

where

export LICENCE="SASViyaV4_9xxxx_0_stable_2021.1.6_license_2021-10-21T071701.jwt"
export CERTS="SASViyaV4_9xxxx_certs.zip"
export SASNAMESPACE="sasoperator"
export CADENCE="stable"
export CADENCEVERSION="2021.1.6"
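Before applying anything I also do a couple of sanity checks on the inputs and the generated file - nothing official, just my own habit (the server-side dry run only works because the SASDeployment CRD was installed together with the operator):

ls -l "$CERTS" "$LICENCE" deploy/kustomization.yaml
grep -E 'cadenceName|cadenceVersion' viya4-sasdeployment.yaml
kubectl -n sasoperator apply --dry-run=server -f viya4-sasdeployment.yaml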

and the file looks like this (without certs and license, of course):

 

 

---
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: sas-viya
stringData:
  cacert: |
    -----BEGIN CERTIFICATE-----
xxx
    -----END CERTIFICATE-----
  cert: |
    -----BEGIN RSA PRIVATE KEY-----
xxxx
    -----END RSA PRIVATE KEY-----
    -----BEGIN CERTIFICATE-----
xxx
    -----END CERTIFICATE-----
  license:
xxx
---
apiVersion: orchestration.sas.com/v1alpha1
kind: SASDeployment
metadata:
  annotations:
    operator.sas.com/checksum: ""
  creationTimestamp: null
  name: sas-viya
spec:
  caCertificate:
    secretKeyRef:
      key: cacert
      name: sas-viya
  cadenceName: stable
  cadenceVersion: 2021.1.6
  clientCertificate:
    secretKeyRef:
      key: cert
      name: sas-viya
  license:
    secretKeyRef:
      key: license
      name: sas-viya
  repositoryWarehouse:
    updatePolicy: Never
  userContent:
    files:
      kustomization.yaml: |
        namespace: sasoperator
        resources:
        - sas-bases/base
        - sas-bases/overlays/cert-manager-issuer
        - sas-bases/overlays/network/networking.k8s.io
        - sas-bases/overlays/cas-server
        - sas-bases/overlays/internal-postgres
        # If your deployment contains programming-only offerings only, comment out the next line
        - sas-bases/overlays/internal-elasticsearch
        - sas-bases/overlays/update-checker
        configurations:
        - sas-bases/overlays/required/kustomizeconfig.yaml
        transformers:
        # If your deployment does not support privileged containers or if your deployment
        # contains programming-only offerings, comment out the next line
        - sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml
        - sas-bases/overlays/required/transformers.yaml
        - site-config/security/cert-manager-provided-ingress-certificate.yaml
        # If your deployment contains programming-only offerings only, comment out the next line
        - sas-bases/overlays/internal-elasticsearch/internal-elasticsearch-transformer.yaml
        # Mount information
        # - site-config/{{ DIRECTORY-PATH }}/cas-add-host-mount.yaml
        components:
        - sas-bases/components/security/core/base/full-stack-tls
        - sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls
        patches:
        - path: site-config/storageclass.yaml
          target:
            kind: PersistentVolumeClaim
            annotationSelector: sas.com/component-name in (sas-backup-job,sas-data-quality-services,sas-commonfiles,sas-cas-operator,sas-pyconfig)
        # License information
        # secretGenerator:
        # - name: sas-license
        #   type: sas.com/license
        #   behavior: merge
        #   files:
        #   - SAS_LICENSE=license.jwt
        configMapGenerator:
        - name: ingress-input
          behavior: merge
          literals:
          - INGRESS_HOST=vmkub01.local
        - name: sas-shared-config
          behavior: merge
          literals:
          - SAS_SERVICES_URL=https://vmkub01.local:443
          # - SAS_URL_EXTERNAL_VIYA={{ EXTERNAL-PROXY-URL }}
      sas-bases: ""
      site-config/security/cert-manager-provided-ingress-certificate.yaml: "## Example PatchTransformer to patch the secret used by nginx ingress objects\n##\n## In the following code, the locations that require user specified values are indicated by a capitalized and\n## hyphenated name set off by curly braces and a space at each end. You should replace this token with the \n## actual value.\n## Replace the curly braces, interior spaces, and the variable name.\n## For instance, \"sas-viya-issuer\"\n## should be replaced with the name of the cert-manager issuer that will issue certificates used to make\n## TLS connections to the SAS Viya applications, such as sas-viya-issuer.\n## If you use the suggested example, the correct, final syntax would be:\n## value: sas-viya-issuer\n##\n##\n---\napiVersion: builtin\nkind: PatchTransformer\nmetadata:\n  name: sas-cert-manager-ingress-annotation-transformer\npatch: |-\n  - op: add\n    path: /metadata/annotations/cert-manager.io~1issuer\n    value: sas-viya-issuer # name of the cert-manager issuer that will supply the Ingress cert, such as sas-viya-issuer\ntarget:\n  kind: Ingress\n  name: .*"
      site-config/storageclass.yaml: |
        kind: RWXStorageClass
        metadata:
         name: wildcard
        spec:
         storageClassName: nfs-client

Then:

kubectl apply -f viya4-sasdeployment.yaml -n sasoperator

Finally I got:

NAME       STATE       CADENCENAME   CADENCEVERSION   CADENCERELEASE           AGE
sas-viya   SUCCEEDED   stable        2021.1.6         20211029.1635519350329   7h8m

but:

sas-model-management-c894776d9-dsvww                   0/1     Init:CreateContainerConfigError   0          7h53m
sas-model-manager-app-795848d4db-f9tx5                 0/1     CreateContainerConfigError        0          7h53m
sas-model-publish-f4f9f5d7d-c7hrc                      0/1     Init:CreateContainerConfigError   0          7h53m
sas-model-repository-86f4b6cb47-gwddw                  0/1     CreateContainerConfigError        0          7h53m
sas-model-studio-app-5bd79dfdb-7kwxz                   0/1     CreateContainerConfigError        0          7h53m
sas-natural-language-conversations-576674865d-wssnw    0/1     CreateContainerConfigError        0          7h53m
sas-natural-language-generation-5bc9f5b9-28cq9         0/1     CreateContainerConfigError        0          7h53m
sas-natural-language-understanding-64d776d46b-j9wkt    0/1     CreateContainerConfigError        0          7h53m
sas-notifications-554875b7d5-99446                     0/1     Init:CreateContainerConfigError   0          7h53m
sas-office-addin-app-5fc7d68d96-v5rr7                  0/1     CreateContainerConfigError        0          7h53m
sas-opendistro-operator-56f45fb488-lsqkw               0/1     CreateContainerConfigError        0          7h53m
sas-parse-execution-provider-54b6567f59-rrhwv          0/1     CreateContainerConfigError        0          7h53m
sas-preferences-656d9cd848-87njx                       0/1     Init:CreateContainerConfigError   0          7h53m
sas-prepull-85c69b74c7-bhgsn                           1/1     Running                           0          7h54m
sas-projects-8955cf56f-v8tf4                           0/1     CreateContainerConfigError        0          7h53m
sas-pyconfig-j2zgm                                     0/1     Pending                           0          7h53m
sas-rabbitmq-server-0                                  0/1     Pending                           0          7h53m
sas-rabbitmq-server-1                                  0/1     Pending                           0          7h53m
sas-rabbitmq-server-2                                  0/1     Pending                           0          7h53m
sas-readiness-78955dc49d-mg8fp                         0/1     CreateContainerConfigError        0          7h53m
sas-report-distribution-6d6b55ddd5-67mq4               0/1     Init:CreateContainerConfigError   0          7h53m
sas-report-execution-7498cf5d86-rrlgz                  0/1     CreateContainerConfigError        0          7h53m
sas-report-renderer-6fd7d8d5-fkjcd                     0/1     CreateContainerConfigError        0          7h53m
sas-report-services-group-7d8564487d-txjmq             0/1     CreateContainerConfigError        0          7h53m
sas-scheduled-backup-job-27260700-fzmf9                0/2     Pending                           0          5h56m
sas-scheduler-776d8686c9-v7jml                         0/1     CreateContainerConfigError        0          7h53m
sas-score-definitions-6bf66dcfd-xn6gw                  0/1     Init:CreateContainerConfigError   0          7h53m
sas-score-execution-8d96cf55b-twvqk                    0/1     Pending                           0          7h53m

and all the pods with errors look like this (kubectl describe pod):

  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  35m                  default-scheduler  Successfully assigned sasoperator/sas-conversation-designer-app-646ff4697c-k2xcf to vmkub01
  Normal   Pulling    35m                  kubelet            Pulling image "cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-certframe:3.20.7-20211015.1634318362435"
  Normal   Pulled     34m                  kubelet            Successfully pulled image "cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-certframe:3.20.7-20211015.1634318362435" in 59.23941659s
  Normal   Created    34m                  kubelet            Created container sas-certframe
  Normal   Started    34m                  kubelet            Started container sas-certframe
  Normal   Pulling    34m                  kubelet            Pulling image "cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-conversation-designer-app:2.9.1-20211013.1634156561133"
  Normal   Pulled     24m                  kubelet            Successfully pulled image "cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-conversation-designer-app:2.9.1-20211013.1634156561133" in 10m17.950160092s
  Warning  Failed     22m (x11 over 24m)   kubelet            Error: secret "sas-consul-client" not found
  Normal   Pulled     19s (x110 over 24m)  kubelet            Container image "cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-conversation-designer-app:2.9.1-20211013.1634156561133" already present on machine

Please help me get rid of this (the error, of course :D)

 

1 ACCEPTED SOLUTION

Accepted Solutions
N224
Obsidian | Level 7

Finally, to get rid of this specific problem:

Creation of the standalone NFS server should look like this:

 

# NFS :
export STORAGEFOLDER="/home/saspodstorage" 
export NFSRULES="*(rw,sync,no_subtree_check,crossmnt,fsid=0)" # This is the most important part - world access to the share
export NFSNETWORK="10.0.110.0/24" # Firewall rule: the network subnet allowed to reach the share
export NFSSERVER="10.0.110.99" # NFS server IP

Installation:

 

# ---- NFS Server --------------------
sudo apt -y install nfs-kernel-server
sudo cat /proc/fs/nfsd/versions
sudo mkdir -p /srv/nfs4/nfs-share
sudo mkdir -p $STORAGEFOLDER
sudo mount --bind $STORAGEFOLDER /srv/nfs4/nfs-share
sudo echo "$STORAGEFOLDER /srv/nfs4/nfs-share  none   bind   0   0" >> /etc/fstab
sudo mount -a
sudo ufw allow from $NFSNETWORK to any port nfs
sudo echo "/srv/nfs4/nfs-share         $NFSRULES" >> /etc/exports
sudo chmod 777 -R $STORAGEFOLDER
sudo exportfs -ar
sudo exportfs -v
sudo systemctl restart nfs-server # It's important to restart the service
sleep 1m 
clear
# --------------------------------------------------------
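A quick check I run before moving on, just to be sure the export is really there (showmount comes with the NFS packages):

showmount -e $NFSSERVER
sudo exportfs -v | grep nfs-share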

Provisioner (default RBAC is enabled; I got it to work without any security tuning just by placing it in the same namespace as sasoperator):

# ---- NFS Subdir External Provisioner ----
kubectl create ns sasoperator
echo "Starting installation of the NFS Subdir External Provisioner"
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=$NFSSERVER \
    --set nfs.path=/srv/nfs4/nfs-share \
	--set storageClass.defaultClass=true \
	--set storageClass.accessModes=ReadWriteMany \
	--namespace sasoperator
# --------------------------------------------------------
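Then I check the provisioner and the storage class before starting the Viya deployment (the label selector is just what the chart sets by default, as far as I can tell):

kubectl -n sasoperator get pods -l app=nfs-subdir-external-provisioner   # should be 1/1 Running
kubectl get storageclass                                                 # "nfs-client" should be listed and marked (default)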

After giving it some time, test the provisioner:

tee test-pod.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF

kubectl apply -f test-pod.yaml -n sasoperator
sleep 1m
kubectl describe pod test-pod -n sasoperator
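And a couple of follow-up checks once the test pod has completed (the provisioner creates one subdirectory per claim on the share, so the SUCCESS file should show up there):

kubectl -n sasoperator get pvc test-claim      # should be Bound
ls $STORAGEFOLDER/*/SUCCESS                    # the file touched by the test pod
kubectl -n sasoperator delete -f test-pod.yaml # clean up the test resources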

And now the "sas-consul-client secret not found" error is gone 🙂

Now I'm facing a problem that could theoretically kill the whole idea of a single-machine deployment - the limit of 110 pods per node in Kubernetes.
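In case somebody wants to try the same thing: 110 pods is only the kubelet default, not a hard limit. This is a rough sketch of how I plan to raise it on a kubeadm node (the config path is the kubeadm default, and the node still needs enough CPU/RAM for that many pods):

# maxPods is usually not present in /var/lib/kubelet/config.yaml (default 110), so append it
sudo grep -q '^maxPods:' /var/lib/kubelet/config.yaml \
  || echo 'maxPods: 250' | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet
kubectl describe node vmkub01 | grep -A8 -i capacity   # "pods:" should now show 250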

😉


14 REPLIES
alexal
SAS Employee

Did you go through the "Verify" steps of that SAS note? If yes, what was the output?

N224
Obsidian | Level 7
Could you pass me the link? I can't find the verify steps in the deployment guide 😐
gwootton
SAS Super FREQ
Are the consul pods up and running (kubectl get pods -l app=sas-consul-server)? Are all your PVCs bound (kubectl get pvc)?
--
Greg Wootton | Principal Systems Technical Support Engineer
N224
Obsidian | Level 7

Hello there (general Kenobi:D)

N224_0-1635838118529.png

and the second one 

N224_1-1635838158591.png

It looks like all of the components are pending, BUT, while replying to a private message from a SAS mate, I've discovered that there is some kind of space issue

N224_2-1635838277739.png

I'm going to read about this

 

 

gwootton
SAS Super FREQ

You can see why a pod is pending using kubectl describe <pod>; in this case it is probably because the PVC for the consul servers is pending. The logs for your nfs-client storage provisioner might give some insight into why they are pending.

Probably "kubectl -n nfs-client get po" to get the name of the nfs provisioner pod,
then "kubectl -n nfs-client logs <pod_name> | less" to read that log.

$ kubectl -n nfs-client get po
NAME                                               READY   STATUS    RESTARTS   AGE
nfs-subdir-external-provisioner-75c47b46d9-9xvkp   1/1     Running   0          23h
$ kubectl -n nfs-client logs nfs-subdir-external-provisioner-75c47b46d9-9xvkp | less

I'm not sure how you are creating your cluster, but those 100% occupied volumes appear to be for snaps (a wholly separate topic), which are essentially separate volumes for application images. I think it's normal for those to be at 100%; see this link for more info on that path.

https://snapcraft.io/docs/system-snap-directory

--
Greg Wootton | Principal Systems Technical Support Engineer
gwootton
SAS Super FREQ

You can use something like this to test your provisioner:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
--
Greg Wootton | Principal Systems Technical Support Engineer
N224
Obsidian | Level 7

Thanks mate, now I see the problem - it's with the PersistentVolumeClaim. After that test it came up with:

root@vmkub01:~# kubectl describe pod test-pod -n sasoperator
Name:         test-pod
Namespace:    sasoperator
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  test-pod:
    Image:      gcr.io/google_containers/busybox:1.24
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
    Args:
      -c
      touch /mnt/SUCCESS && exit 0 || exit 1
    Environment:  <none>
    Mounts:
      /mnt from nfs-pvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fd9g8 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  nfs-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-claim
    ReadOnly:   false
  kube-api-access-fd9g8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  39s (x2 over 40s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

It looks the same for the consul server pod:

root@vmkub01:~# kubectl describe pod sas-consul-server-0 -n sasoperator
Name:           sas-consul-server-0
Namespace:      sasoperator
Priority:       0
Node:           <none>
Labels:         app=sas-consul-server
                app.kubernetes.io/name=sas-consul-server
                controller-revision-hash=sas-consul-server-599b54cc66
                sas.com/deployment=sas-viya
                statefulset.kubernetes.io/pod-name=sas-consul-server-0
                workload.sas.com/class=stateful
Annotations:    prometheus.io/scheme: https
                sas.com/certificate-file-format: pem
                sas.com/component-name: sas-consul-server
                sas.com/component-version: 1.310006.0-20211014.1634217840806
                sas.com/kustomize-base: base
                sas.com/tls-enabled-ports: all
                sas.com/tls-mode: full-stack
                sas.com/version: 1.310006.0
                seccomp.security.alpha.kubernetes.io/pod: runtime/default
                sidecar.istio.io/inject: false
                sidecar.istio.io/proxyCPU: 15m
                sidecar.istio.io/proxyMemory: 115Mi
                traffic.sidecar.istio.io/excludeInboundPorts: 8301
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/sas-consul-server
Init Containers:
  sas-certframe:
    Image:      cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-certframe:3.20.7-20211015.1634318362435
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     500m
      memory:  500Mi
    Requests:
      cpu:     50m
      memory:  50Mi
    Environment Variables from:
      sas-certframe-config-2ch97fd95b                      ConfigMap  Optional: false
      sas-certframe-ingress-certificate-config-cmm2t44t88  ConfigMap  Optional: false
      sas-certframe-user-config-c4ch2c59m7                 ConfigMap  Optional: false
    Environment:
      KUBE_POD_NAME:                       sas-consul-server-0 (v1:metadata.name)
      SAS_CERTFRAME_TOKEN_DIR:             /certframe-token
      SAS_ADDITIONAL_CA_CERTIFICATES_DIR:  /customer-provided-ca-certificates
    Mounts:
      /certframe-token from certframe-token (rw)
      /customer-provided-ca-certificates from customer-provided-ca-certificates (rw)
      /security from security (rw)
  sas-certframe-client-token-generator:
    Image:      cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-certframe:3.20.7-20211015.1634318362435
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     500m
      memory:  500Mi
    Requests:
      cpu:     50m
      memory:  50Mi
    Environment:
      SAS_KEYS_SECRET_NAME:        sas-consul-client
      SAS_KEYS_KEY_NAMES:          CONSUL_HTTP_TOKEN
      SAS_SECURITY_ARTIFACTS_DIR:  /security
      SAS_CERTFRAME_TOKEN_DIR:     /certframe-token
    Mounts:
      /certframe-token from certframe-token (rw)
      /security from security (rw)
  sas-certframe-management-token-generator:
    Image:      cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-certframe:3.20.7-20211015.1634318362435
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     500m
      memory:  500Mi
    Requests:
      cpu:     50m
      memory:  50Mi
    Environment:
      SAS_KEYS_SECRET_NAME:        sas-consul-management
      SAS_KEYS_KEY_NAMES:          CONSUL_MANAGEMENT_TOKEN CONSUL_TOKENS_ENCRYPTION
      SAS_KEYS_KEY_TYPES:          uuid base64
      SAS_SECURITY_ARTIFACTS_DIR:  /security
      SAS_CERTFRAME_TOKEN_DIR:     /certframe-token
    Mounts:
      /certframe-token from certframe-token (rw)
      /security from security (rw)
Containers:
  sas-consul-server:
    Image:       cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-consul-server:1.310006.0-20211014.1634217840806
    Ports:       8300/TCP, 8301/TCP, 8301/UDP, 8500/TCP
    Host Ports:  0/TCP, 0/TCP, 0/UDP, 0/TCP
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:      250m
      memory:   150Mi
    Liveness:   exec [sh /opt/sas/viya/home/bin/consul-liveness-probe.sh] delay=45s timeout=1s period=30s #success=1 #failure=3
    Readiness:  exec [sh /opt/sas/viya/home/bin/consul-readiness-probe.sh] delay=45s timeout=1s period=30s #success=1 #failure=3
    Startup:    exec [sh /opt/sas/viya/home/bin/consul-startup-probe.sh] delay=45s timeout=1s period=30s #success=1 #failure=3
    Environment Variables from:
      sas-tls-config-f8ccd48c6m     ConfigMap  Optional: false
      sas-shared-config-9dh449kdkb  ConfigMap  Optional: false
      sas-consul-client             Secret     Optional: false
      sas-consul-management         Secret     Optional: false
      ingress-input-mfh55658f2      ConfigMap  Optional: false
    Environment:
      CONSUL_BOOTSTRAP_EXPECT:  3
      CONSUL_CLIENT_ADDRESS:    0.0.0.0
      CONSUL_DATACENTER_NAME:   viya
    Mounts:
      /consul/data from sas-viya-consul-data-volume (rw)
      /opt/sas/viya/config/etc/SASSecurityCertificateFramework/cacerts from security (rw,path="cacerts")
      /opt/sas/viya/config/etc/SASSecurityCertificateFramework/private from security (rw,path="private")
      /opt/sas/viya/config/etc/SASSecurityCertificateFramework/tokens/consul/default from tmp-volume (rw,path="consul-tokens")
      /opt/sas/viya/config/etc/consul.d from tmp-volume (rw,path="consul.d")
      /opt/sas/viya/config/etc/consul.d/default from sitedefault-vol (rw)
      /opt/sas/viya/config/tmp/sas-consul from tmp-volume (rw,path="sas-consul")
      /security from security (rw)
      /tmp from tmp-volume (rw,path="tmp")
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  sas-viya-consul-data-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  sas-viya-consul-data-volume-sas-consul-server-0
    ReadOnly:   false
  sitedefault-vol:
    Type:                Projected (a volume that contains injected data from multiple sources)
    ConfigMapName:       sas-consul-config-7m8mcgtm5c
    ConfigMapOptional:   <nil>
    SecretName:          sas-consul-config-6m98g47d77
    SecretOptionalName:  <nil>
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  certframe-token:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sas-certframe-token
    Optional:    false
  security:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  customer-provided-ca-certificates:
    Type:        ConfigMap (a volume populated by a ConfigMap)
    Name:        sas-customer-provided-ca-certificates-29kdmk686c
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
                 workload.sas.com/class=stateful:NoSchedule
                 workload.sas.com/class=stateless:NoSchedule
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  8s (x279 over 6h15m)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

Here comes the problem. I'm going to analyze the whole way I'm creating my NFS storage class and provisioner:

1 pod has unbound immediate PersistentVolumeClaims

 

 

gwootton
SAS Super FREQ

Assuming you have a separate NFS server you are using for this, I create my nfs-client storage class using this (replace nfs.example.com and /srv/share with the values for your NFS server share, and make sure the share has world-writeable permissions):

kubectl create ns nfs-client
nfsserver=nfs.example.com
nfsshare="/srv/share"
nfstmp=$(mktemp -d)
cd $nfstmp
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
cd nfs-subdir-external-provisioner/charts/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner . --namespace nfs-client --set nfs.server=$nfsserver --set nfs.path=$nfsshare --set storageClass.accessModes=ReadWriteMany
cd
rm -rf $nfstmp
--
Greg Wootton | Principal Systems Technical Support Engineer
N224
Obsidian | Level 7

Hey! That answer was awesome, mostly the "world writeable" part.

I changed the allow properties and the test-pod started. My whole NFS script looks like this (part of the whole Viya 4 SMD deployment script):

export STORAGEFOLDER="/home/saspodstorage" 
export NFSRULES="0.0.0.0/0(rw,sync,no_subtree_check,crossmnt,fsid=0)" 
export NFSNETWORK="10.0.110.0/24" 
export NFSSERVER="10.0.110.99" 

# ---- Install NFS Server --------------------
sudo apt -y install nfs-kernel-server
sudo cat /proc/fs/nfsd/versions
sudo mkdir -p /srv/nfs4/nfs-share
sudo mkdir -p $STORAGEFOLDER
sudo mount --bind $STORAGEFOLDER /srv/nfs4/nfs-share
sudo echo "$STORAGEFOLDER /srv/nfs4/nfs-share  none   bind   0   0" >> /etc/fstab
sudo mount -a
sudo ufw allow from $NFSNETWORK to any port nfs
sudo echo "/srv/nfs4/nfs-share         $NFSRULES" >> /etc/exports
sudo exportfs -ar
sudo exportfs -v
clear
# --------------------------------------------------------

# ---- Install NFS Subdir External Provisioner ----
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=$NFSSERVER \
    --set nfs.path=/srv/nfs4/nfs-share \
	--set storageClass.defaultClass=true \
	--set storageClass.accessModes=ReadWriteMany

 

 

gwootton
SAS Super FREQ
Great! If your test pod succeeds, your PVCs should be able to bind in the Viya deployment, so your consul servers should be able to start.
--
Greg Wootton | Principal Systems Technical Support Engineer
N224
Obsidian | Level 7
Deploying the whole installation now 😄 so let's wait 🙂
N224
Obsidian | Level 7

Ok, I finally got rid of that problem. Now I'm fighting with some other ones; I'm going to try to resolve them by myself, and if I don't succeed... I would appreciate the community's help 😄

N224
Obsidian | Level 7

Finally, to get rid of this specific problem:

Creation of the standalone NFS server should look like this:

 

# NFS :
export STORAGEFOLDER="/home/saspodstorage" 
export NFSRULES="*(rw,sync,no_subtree_check,crossmnt,fsid=0)" # This is the most important part - world access to the share
export NFSNETWORK="10.0.110.0/24" # Firewall rule: the network subnet allowed to reach the share
export NFSSERVER="10.0.110.99" # NFS server IP

Installation:

 

# ---- NFS Server --------------------
sudo apt -y install nfs-kernel-server
sudo cat /proc/fs/nfsd/versions
sudo mkdir -p /srv/nfs4/nfs-share
sudo mkdir -p $STORAGEFOLDER
sudo mount --bind $STORAGEFOLDER /srv/nfs4/nfs-share
sudo echo "$STORAGEFOLDER /srv/nfs4/nfs-share  none   bind   0   0" >> /etc/fstab
sudo mount -a
sudo ufw allow from $NFSNETWORK to any port nfs
sudo echo "/srv/nfs4/nfs-share         $NFSRULES" >> /etc/exports
sudo chmod 777 -R $STORAGEFOLDER
sudo exportfs -ar
sudo exportfs -v
sudo systemctl restart nfs-server # It's important to restart the service
sleep 1m 
clear
# --------------------------------------------------------

Provisioner (default RBAC is enabled; I got it to work without any security tuning just by placing it in the same namespace as sasoperator):

# ---- NFS Subdir External Provisioner ----
kubectl create ns sasoperator
echo "Starting installation of the NFS Subdir External Provisioner"
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=$NFSSERVER \
    --set nfs.path=/srv/nfs4/nfs-share \
	--set storageClass.defaultClass=true \
	--set storageClass.accessModes=ReadWriteMany \
	--namespace sasoperator
# --------------------------------------------------------

After giving it some time, test the provisioner:

tee test-pod.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF

kubectl apply -f test-pod.yaml -n sasoperator
sleep 1m
kubectl describe pod test-pod -n sasoperator

And now the "sas-consul-client secret not found" error is gone 🙂

Now I'm facing a problem that could theoretically kill the whole idea of a single-machine deployment - the limit of 110 pods per node in Kubernetes.

😉
