EyalGonen
Lapis Lazuli | Level 10

Hi all,

 

Does SAS have a way to stop/start a SAS Viya 4 environment on AWS?

I found a paper referring to "sas-orchestrator" but could not find any documentation on how to use it to start/stop.

Any idea?

 

Thanks

EyalGonen
Lapis Lazuli | Level 10

Hi @BrunoMueller ,

 

Returning to this question after some time.

Running the SAS-supplied sas-stop-all job stops all the SAS pods, but it does not stop the AWS prerequisite pods, which remain running. For example, see the output below listing the pods in the kube-system namespace of the AWS cluster after I ran the sas-stop-all job:

 

[sas@65ffd157191b /]$ kubectl -n kube-system get pods -o wide
NAME                                                         READY   STATUS    RESTARTS   AGE    IP               NODE                                             NOMINATED NODE   READINESS GATES
aws-node-2nmtm                                               1/1     Running   0          106m   192.168.26.23    ip-192-168-26-23.il-central-1.compute.internal   <none>           <none>
aws-node-nm4kx                                               1/1     Running   0          107m   192.168.51.33    ip-192-168-51-33.il-central-1.compute.internal   <none>           <none>
cluster-autoscaler-aws-cluster-autoscaler-5d99f7f75c-k8dmh   1/1     Running   0          101m   192.168.47.236   ip-192-168-51-33.il-central-1.compute.internal   <none>           <none>
coredns-7fd98fbdf9-4pprb                                     1/1     Running   0          112m   192.168.12.32    ip-192-168-51-33.il-central-1.compute.internal   <none>           <none>
coredns-7fd98fbdf9-kk5wk                                     1/1     Running   0          112m   192.168.50.221   ip-192-168-51-33.il-central-1.compute.internal   <none>           <none>
ebs-csi-controller-67868fdb79-bnvm7                          5/5     Running   0          101m   192.168.20.78    ip-192-168-51-33.il-central-1.compute.internal   <none>           <none>
ebs-csi-controller-67868fdb79-dw2mh                          5/5     Running   0          101m   192.168.2.59     ip-192-168-26-23.il-central-1.compute.internal   <none>           <none>
ebs-csi-node-6gvvl                                           3/3     Running   0          101m   192.168.14.30    ip-192-168-26-23.il-central-1.compute.internal   <none>           <none>
ebs-csi-node-7zr8w                                           3/3     Running   0          101m   192.168.58.137   ip-192-168-51-33.il-central-1.compute.internal   <none>           <none>
kube-proxy-t9tgk                                             1/1     Running   0          106m   192.168.26.23    ip-192-168-26-23.il-central-1.compute.internal   <none>           <none>
kube-proxy-w25wm                                             1/1     Running   0          107m   192.168.51.33    ip-192-168-51-33.il-central-1.compute.internal   <none>           <none>
metrics-server-b975dc65f-s7kxt                               1/1     Running   0          100m   192.168.5.14     ip-192-168-51-33.il-central-1.compute.internal   <none>           <none>

I want to shut down all pods, including the ones above, to save AWS costs. The thing is, if I set the node pools' Desired Count to 0, the node pools shut down and the pods with them, but then I do not know how to start them back up again.

 

There must be some documented way to shut down the AWS EKS cluster completely, or at least I hope there is.

 

Any idea?

 

 

gwootton
SAS Super FREQ
The SAS job only stops the Viya software. Once this is done, you'd need to use AWS functions to stop/start your cluster entirely; AWS should be able to help you with the proper way to do this.
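For concreteness, the "AWS functions" in question are the EKS managed node group APIs. A hedged sketch of inspecting the node groups you would later scale (the cluster name, node group name, and region are placeholders, not values from this thread):

```shell
# List the managed node groups in the cluster
aws eks list-nodegroups \
  --region il-central-1 \
  --cluster-name my-viya-cluster

# Show the current scaling configuration (minSize/maxSize/desiredSize)
# of one node group; "my-nodegroup" is a placeholder name
aws eks describe-nodegroup \
  --region il-central-1 \
  --cluster-name my-viya-cluster \
  --nodegroup-name my-nodegroup \
  --query 'nodegroup.scalingConfig'
```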
--
Greg Wootton | Principal Systems Technical Support Engineer
EyalGonen
Lapis Lazuli | Level 10

Thanks @gwootton 

 

For the sake of anyone interested in this topic, here is what I did.

To stop: after running the "sas-stop-all" job, I set the Desired Count of the remaining dynamic (autoscaled) node pools that had not yet shut down to zero, which forced the remaining running AWS pods to shut down.
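The stop sequence can be sketched as shell commands. This is a hedged sketch, not the official procedure: the namespace (`viya`), cluster name, region, and node group name are placeholders for your own values; `sas-stop-all` is the SAS-supplied CronJob referenced above.

```shell
# 1. Stop the Viya software by running the SAS-supplied sas-stop-all
#    CronJob once as a regular Job
kubectl -n viya create job sas-stop-all-manual --from=cronjob/sas-stop-all

# 2. Scale each remaining EKS managed node group down to zero
#    (repeat for every node group that is still running)
aws eks update-nodegroup-config \
  --region il-central-1 \
  --cluster-name my-viya-cluster \
  --nodegroup-name my-nodegroup \
  --scaling-config minSize=0,maxSize=1,desiredSize=0
```

Note that `minSize` must also be lowered to 0, since `desiredSize` cannot be set below `minSize`.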

 

When I wanted to start back up again, I changed the Desired Count from zero back to one. The AWS pods then started running again, and I could start SAS back up using the "sas-start-all" job. All seems to work fine.
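The start sequence can be sketched the same way. Again a hedged sketch with placeholder names (namespace `viya`, cluster, region, and node group are your own values); `sas-start-all` is the SAS-supplied CronJob mentioned above.

```shell
# 1. Scale the node groups back up so nodes rejoin the cluster
aws eks update-nodegroup-config \
  --region il-central-1 \
  --cluster-name my-viya-cluster \
  --nodegroup-name my-nodegroup \
  --scaling-config minSize=1,maxSize=1,desiredSize=1

# 2. Once the nodes report Ready, start Viya by running the
#    sas-start-all CronJob once as a regular Job
kubectl get nodes
kubectl -n viya create job sas-start-all-manual --from=cronjob/sas-start-all
```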

