Hi folks,
We are migrating our SAS server from Linux to Solaris; both run SAS 9.4M6. The disks are exactly the same, with the same configuration.
If we run a process on our current (Linux) server, the FULLSTIMER results are:
NOTE: DATA statement used (Total process time):
real time 3:25.94
user cpu time 34.03 seconds
system cpu time 1:23.48
memory 14941.40k
OS Memory 46500.00k
Timestamp 09/28/2020 05:10:40 PM
Step Count 11 Switch Count 375
Page Faults 3
Page Reclaims 23244
Page Swaps 0
Voluntary Context Switches 105903
Involuntary Context Switches 335
Block Input Operations 121081552
Block Output Operations 123929424
The same process executed on the new server was:
NOTE: DATA statement used (Total process time):
real time 26:46.93
user cpu time 17:45.94
system cpu time 8:01.01
memory 70107.87k
OS Memory 116408.00k
Timestamp 09/28/2020 05:35:16 PM
Step Count 3 Switch Count 2456
Page Faults 0
Page Reclaims 78
Page Swaps 4
Voluntary Context Switches 46739
Involuntary Context Switches 8595
Block Input Operations 4
Block Output Operations 0
The process is a DATA step appending multiple SAS tables via a SET statement. The final table has about 330 million records and fewer than 30 columns.
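The step is essentially of this shape (a minimal sketch; table and variable names are placeholders, not the actual ones):

```sas
/* Hypothetical sketch: concatenate many input tables into one final table. */
/* Declaring a common length up front avoids the "multiple lengths were     */
/* specified" warnings seen in the log (assumed variable name and length).  */
data work.final;
    length somevar $ 20;                  /* longest length across inputs */
    set lib.part01 lib.part02 lib.part03; /* ...and the remaining tables  */
run;
```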
The SAS options, under performance group, are the same for both servers.
Any idea why the performance on the new server is so bad?
Regards,
Can you post all the NOTE statements from the new system? I am wondering if you are running into CEDA (Cross Environment Data Access) issues.
Hi @MargaretC ,
Here are the complete SAS notes:
NOTE: Variable YYYYYY is uninitialized.
WARNING: Multiple lengths were specified for the variable YYYYY by input data set(s). This can cause truncation of data.
WARNING: Multiple lengths were specified for the variable YYYY by input data set(s). This can cause truncation of data.
WARNING: Multiple lengths were specified for the variable YYYYY by input data set(s). This can cause truncation of data.
WARNING: Multiple lengths were specified for the variable YYYYYY by input data set(s). This can cause truncation of data.
NOTE: There were 5820616 observations read from the data set XXXX.
NOTE: There were 4569248 observations read from the data set XXXX.
NOTE: There were 4509959 observations read from the data set XXXX.
NOTE: There were 4518362 observations read from the data set XXXX.
NOTE: There were 4546455 observations read from the data set XXXX.
NOTE: There were 4423525 observations read from the data set XXXX.
NOTE: There were 4254636 observations read from the data set XXXX.
NOTE: There were 4359365 observations read from the data set XXXX.
NOTE: There were 4353605 observations read from the data set XXXX.
NOTE: There were 4554579 observations read from the data set XXXX.
NOTE: There were 4267877 observations read from the data set XXXX.
NOTE: There were 4262071 observations read from the data set XXXX.
NOTE: There were 4694086 observations read from the data set XXXX.
NOTE: There were 4789203 observations read from the data set XXXX.
NOTE: There were 4072292 observations read from the data set XXXX.
NOTE: There were 4567131 observations read from the data set XXXX.
NOTE: There were 4538530 observations read from the data set XXXX.
NOTE: There were 4475507 observations read from the data set XXXX.
NOTE: There were 4411171 observations read from the data set XXXX.
NOTE: There were 4359381 observations read from the data set XXXX.
NOTE: There were 4324500 observations read from the data set XXXX.
NOTE: There were 4228892 observations read from the data set XXXX.
NOTE: There were 4190545 observations read from the data set XXXX.
NOTE: There were 4185068 observations read from the data set XXXX.
NOTE: There were 4167472 observations read from the data set XXXX.
NOTE: There were 4141983 observations read from the data set XXXX.
NOTE: There were 4041356 observations read from the data set XXXX.
NOTE: There were 3920099 observations read from the data set XXXX.
NOTE: There were 3788828 observations read from the data set XXXX.
NOTE: There were 3636657 observations read from the data set XXXX.
NOTE: There were 3536831 observations read from the data set XXXX.
NOTE: There were 3340947 observations read from the data set XXXX.
NOTE: There were 3106852 observations read from the data set XXXX.
NOTE: There were 4505524 observations read from the data set XXXX.
NOTE: There were 4490082 observations read from the data set XXXX.
NOTE: There were 4776326 observations read from the data set XXXX.
NOTE: There were 4687402 observations read from the data set XXXX.
NOTE: There were 4704937 observations read from the data set XXXX.
NOTE: There were 4699015 observations read from the data set XXXX.
NOTE: There were 4667477 observations read from the data set XXXX.
NOTE: There were 4628155 observations read from the data set XXXX.
NOTE: There were 4534338 observations read from the data set XXXX.
NOTE: There were 4547107 observations read from the data set XXXX.
NOTE: There were 4528159 observations read from the data set XXXX.
NOTE: There were 4537420 observations read from the data set XXXX.
NOTE: There were 4384072 observations read from the data set XXXX.
NOTE: There were 4256290 observations read from the data set XXXX.
NOTE: There were 4301934 observations read from the data set XXXX.
NOTE: There were 4262408 observations read from the data set XXXX.
NOTE: There were 4229829 observations read from the data set XXXX.
NOTE: There were 4103216 observations read from the data set XXXX.
NOTE: There were 3573311 observations read from the data set XXXX.
NOTE: There were 3378687 observations read from the data set XXXX.
NOTE: There were 3208889 observations read from the data set XXXX.
NOTE: There were 3027670 observations read from the data set XXXX.
NOTE: There were 4159768 observations read from the data set XXXX.
NOTE: There were 4110686 observations read from the data set XXXX.
NOTE: There were 4052798 observations read from the data set XXXX.
NOTE: There were 4100190 observations read from the data set XXXX.
NOTE: There were 4101430 observations read from the data set XXXX.
NOTE: There were 4096104 observations read from the data set XXXX.
NOTE: There were 4155598 observations read from the data set XXXX.
NOTE: There were 4150791 observations read from the data set XXXX.
NOTE: There were 4148467 observations read from the data set XXXX.
NOTE: There were 4133380 observations read from the data set XXXX.
NOTE: There were 4125579 observations read from the data set XXXX.
NOTE: There were 4106291 observations read from the data set XXXX.
NOTE: There were 4103442 observations read from the data set XXXX.
NOTE: There were 4100431 observations read from the data set XXXX.
NOTE: There were 4090390 observations read from the data set XXXX.
NOTE: There were 4081835 observations read from the data set XXXX.
NOTE: There were 4065041 observations read from the data set XXXX.
NOTE: There were 4086124 observations read from the data set XXXX.
NOTE: There were 4079646 observations read from the data set XXXX.
NOTE: There were 4072297 observations read from the data set XXXX.
NOTE: There were 4032759 observations read from the data set XXXX.
NOTE: There were 4017684 observations read from the data set XXXX.
NOTE: There were 4026204 observations read from the data set XXXX.
NOTE: The data set WORK.FINAL has 329186812 observations and 25 variables.
NOTE: DATA statement used (Total process time):
real time 26:46.93
user cpu time 17:45.94
system cpu time 8:01.01
memory 70107.87k
OS Memory 116408.00k
Timestamp 09/28/2020 05:35:16 PM
Step Count 3 Switch Count 2456
Page Faults 0
Page Reclaims 78
Page Swaps 4
Voluntary Context Switches 46739
Involuntary Context Switches 8595
Block Input Operations 4
Block Output Operations 0
The same occurs when we create a table from an external database (for example, SQL Server): the same query takes 3 to 4 times longer. The same happens with PROC SORT, etc.
Regards,
What computer model are you using for your Solaris system? I am concerned that the user CPU time on the Linux system is only 34.03 seconds, while on the Solaris system it is 17:45.94 (about 1,066 seconds). This suggests the machine you are running SAS on is much slower.
And, just to confirm: you see no NOTEs in the SAS log about Cross Environment Data Access being used?
Hi @MargaretC ,
We have already converted all SAS tables to the Solaris format, so CEDA is not used.
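One quick way to double-check is to look at the Data Representation attribute of a migrated table; if it still shows the original host's format rather than the native Solaris SPARC one, CEDA will be used on every read (library and table names below are placeholders):

```sas
/* Inspect the table's attributes; the "Data Representation" line  */
/* shows which host format the data set was created for.           */
proc contents data=mylib.final;
run;
```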
Our Solaris server is a SPARC M6.
Regards,
Our Linux server's CPU is an Intel(R) Xeon(R) CPU E5-2690 v2.
Regards,
Could it be that you are running this on a virtualized server that gets only a small slice of the real hardware? Compared to your Intel system, that SPARC looks outright pathetic, which is hard to believe.
You're right. On Solaris we use a virtualized server, and each physical core is presented as 8 hardware threads. So, if I understand correctly, a SAS process that uses only one thread is getting just 1/8 of a physical core. Is my understanding correct?
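It may also be worth confirming how many processors SAS itself detects inside the virtualized environment, for example:

```sas
/* Display the CPUCOUNT option: the number of processors SAS */
/* believes are available for multi-threaded procedures.     */
proc options option=cpucount value;
run;
```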
Regards,
@MariaD - What do the SAS Environment Manager dashboards show? At the very least these will tell you whether your SAS App server is CPU- or memory-constrained. If your server is maxing out at 100% CPU or 100% physical memory for significant periods, then it is constrained for those resources.
Alternatively any third-party tool monitoring server performance should have similar dashboards.
How many cores did you get on the Intel system?
PS: For how many cores/CPUs is your SAS licensed?