Executed the Kubernetes deployment for all components, and while some pods are running successfully, we are facing issues with others:
FYI: I have gone through the link below for this error, but I am unable to find the file or path to fix it in the LTS 2023.10 release.
SAS Viya 3.4: https://communities.sas.com/t5/SAS-Viya/SAS-Viya-Installation/td-p/495208
2. Other pods are waiting for SAS folders.
Could you please suggest on these issues?
Thank you for your prompt response.
Are both issue 1 and issue 2 related to the fact that the configured internal network for the cluster doesn't fall within the recommended private IP ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)? If so, is there an alternative solution available? Additionally, the documentation doesn't seem to provide guidance on using these specified IP ranges.
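Consul's "No Private IPv4 address found" behavior comes down to whether the pod's address falls inside the RFC 1918 ranges listed above. As a quick sanity check, a small shell helper (my own sketch, not from SAS documentation) can classify an address; compare it against your cluster's pod CIDR:

```shell
# Hedged sketch: classify an IPv4 address against the RFC 1918 private
# ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
is_rfc1918() {
  local ip=$1 a b c d
  IFS=. read -r a b c d <<EOF
$ip
EOF
  if [ "$a" -eq 10 ]; then
    echo private
  elif [ "$a" -eq 172 ] && [ "$b" -ge 16 ] && [ "$b" -le 31 ]; then
    echo private
  elif [ "$a" -eq 192 ] && [ "$b" -eq 168 ]; then
    echo private
  else
    echo public
  fi
}

# Example: an address outside the private ranges, as described in this thread.
is_rfc1918 100.96.0.1   # prints "public"
```

If the pod CIDR classifies as "public" here, that matches the symptom Consul reports.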
Yes, most services depend on Consul, so if Consul is not able to start they will not be able to enter a ready state. I'm not aware of any way to circumvent this requirement.
I found that a possible solution would be to set the environment variable "CONSUL_BIND_EXTERNAL" to "eth0".
Does running this command allow the consul server to start?
kubectl -n namespace set env sts/sas-consul-server CONSUL_BIND_EXTERNAL=eth0
If so, this would also need to be done in the CAS configuration, so you could use these two patchTransformers in your deployment, then rebuild/apply:
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: sas-consul-bind-transformer
patch: |-
  - op: add
    path: /spec/template/spec/containers/0/env/-
    value:
      name: CONSUL_BIND_EXTERNAL
      value: eth0
target:
  group: apps
  kind: StatefulSet
  name: sas-consul-server
  version: v1
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: sas-cas-consul-bind-transformer
patch: |-
  - op: add
    path: /spec/controllerTemplate/spec/containers/2/env/-
    value:
      name: CONSUL_BIND_EXTERNAL
      value: eth0
target:
  group: viya.sas.com
  kind: CASDeployment
  name: .*
  version: v1alpha1
Thank you @gwootton ! Your assistance and advice are greatly valued.
Following the execution of the command below, the sas-consul-server has started.
kubectl -n namespace set env sts/sas-consul-server CONSUL_BIND_EXTERNAL=eth0
I also included the two patchTransformers in the deployment under the site-config directory, following the structure below, and ran a rebuild.
└── $deploy/
├── kustomization.yaml
├── sas-bases/
└── site-config/
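One thing worth double-checking: placing the patch files under site-config/ is not enough on its own; they also have to be listed in the transformers block of $deploy/kustomization.yaml before rebuilding. A hedged sketch (the file names below are hypothetical; the transformers entry itself is standard kustomize):

```yaml
# Hypothetical file names -- use whatever you saved the two
# PatchTransformer definitions as under site-config/.
transformers:
  - site-config/sas-consul-bind-transformer.yaml
  - site-config/sas-cas-consul-bind-transformer.yaml
```

If the files were added but never referenced from kustomization.yaml, the rebuilt manifest would not contain the CONSUL_BIND_EXTERNAL patch at all.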
The issue with SAS folders persists even after these steps. Your insights on this matter would be greatly appreciated.
Here is the current status of the pods, primarily in a waiting state for SAS folders.
All of the pods are currently in the Init state: the sas-start-sequencer init container has completed, but the pods remain pending on the sas-certframe init container. The pod logs indicate they are waiting for SAS folders.
For instance,
Thank you, @gwootton!
"sas-folders" is currently awaiting the availability of "sas-logon."
Both the "sas-configuration" and "sas-logon" pods are pending, and the delay is attributed to the "DB check sleeping" condition (see below). Apart from "DB check sleeping", the pod logs do not contain enough information to diagnose the problem.
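For context, "DB check sleeping" is what a start-up check prints while it retries the database connection in a loop, so the pods stay pending until the database answers. The loop below is an illustration of that shape only, not SAS's actual code:

```shell
# Illustration only: a retry loop of the shape behind a "DB check sleeping"
# message. The command being retried ("$@") stands in for whatever actually
# verifies the database connection.
db_wait() {
  local attempts=$1
  shift
  local i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      echo ready
      return 0
    fi
    echo "DB check sleeping (attempt $i)"
    sleep 1
    i=$((i + 1))
  done
  echo unreachable
  return 1
}
```

In other words, the message itself is not the error; it means the check before it keeps failing, which is why the logs look uninformative.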
Could you please advise on this?
FYI:
-> Validated the postgres dataserver details.
└── $deploy/
├── kustomization.yaml
├── sas-bases/
└── site-config/postgres
-> Performed connectivity test from GKE node to SQL database. (see below)
This suggests the database is not accessible by the sas-logon and sas-configuration pods using the provided connection information.
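To rule out the network path specifically, a plain TCP probe against the PostgreSQL host and port (5432 by default) can be run from a debug pod in the same namespace, so it uses the same route as sas-logon and sas-configuration. A hedged sketch using bash's /dev/tcp (host and port values are placeholders):

```shell
# Hedged sketch: probe host:port over TCP with a short timeout.
# "open" means something is listening; "closed" means refused or timed out.
check_tcp() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# Usage from a debug pod, e.g.: check_tcp <postgres-host> 5432
```

A node-level test succeeding while this in-cluster probe reports "closed" would point at a pod-network or proxy issue rather than the database itself.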
It sounds like you are using external postgres and may not have performed all the required steps for configuring external postgres. This is described in:
$deploy/sas-bases/examples/postgres/README.md under "External PostgreSQL Configuration". There are additional specific steps for Google Cloud Platform Cloud SQL for PostgreSQL if this is what you are using.
Hi @gwootton ,
I suspect there could be an issue with the connection to the database through the "cloud_sql_proxy." Upon inspecting the platform-postgres-sql-proxy pod, I noticed that despite the pod being active, the logs indicate the following:
Hi @gwootton ,
Could you please advise on the issues below:
Note: Using Internal postgres.
1) sas-model-repository:
2) sas-search:
3)sas-cas-server-default-controller:
Note:
I've configured the environment variable "CONSUL_BIND_EXTERNAL" to "eth0" in an attempt to address the "No Private IPv4 address found" error. However, the issue persists.