sathya66
Barite | Level 11

Hi All,

The below code works fine but takes a long time to execute. Is there a way we can improve the performance?

The problem is with the date condition below.

d.date_drd BETWEEN a.tr_pro_date AND a.tr_pro_date+(6*30.5)

The format for the date columns is DATE9. (e.g. 16AUG2018).

 

test1 and test2 are Teradata databases.

proc sql;
  create table work.auths2 as
  select a.*
  from auths as a

  left join test1.provisions as b
    on a.id = b.id
    and b.period = a.month

  left join test2.account as d
    on a.id = d.id
    and d.date_drd between a.tr_pro_date and a.tr_pro_date + (6*30.5)

  left join test1.provisions as f
    on a.id = f.id
    and a.AYPR = f.year_period;
quit;

 

14 REPLIES
ballardw
Super User

Do you actually want 180 days or 6 months? They aren't the same thing.
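They diverge at month boundaries. A minimal sketch of the difference using INTNX (the date value is just an example):

```sas
/* 6 calendar months vs. the fixed 6*30.5 = 183-day offset in the query */
data _null_;
  tr_pro_date = '16AUG2018'd;
  six_months  = intnx('month', tr_pro_date, 6, 'same'); /* 16FEB2019 */
  fixed_days  = tr_pro_date + (6*30.5);                 /* 15FEB2019 */
  put six_months= date9. fixed_days= date9.;
run;
```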

 

What makes you think that specific bit of code is your performance issue?

 

How many records in each of these sets? How long is "more time"?

 

sathya66
Barite | Level 11
180 days or 6 months is also fine.
We were running this same code in our old environment (9.3) and it was fine (run time ~30 mins). It is now running in 9.4 but taking ~90 to 120 mins. Not sure what's wrong; other jobs are running fine except this one.
It has 10 million+ records but was okay in 9.3.
Kurt_Bremser
Super User

Since you use two different paths to the external DBMS, SAS cannot push the processing there and needs to download all tables before doing all the work locally.

You will probably be better off pushing the local dataset to the DBMS and doing the processing there.
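One possible shape for that approach, assuming the test1 libref points at Teradata and you can create tables there (the temporary table name is illustrative):

```sas
/* Push the local driver table to Teradata so the joins run in-database */
proc sql;
  create table test1.auths_tmp as   /* illustrative name on the DBMS side */
  select * from auths;
quit;

proc sql;
  create table work.auths2 as
  select a.*
  from test1.auths_tmp as a
  left join test1.provisions as b
    on a.id = b.id and b.period = a.month;
  /* ... remaining joins as in the original query ... */
quit;

/* clean up the temporary table afterwards */
proc delete data=test1.auths_tmp; run;
```

Note the original query also reads test2.account; for full push-down both librefs would need to resolve to the same Teradata connection.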

sathya66
Barite | Level 11
Yeah, I thought the same, but we have been running this same code for years. As I said above, it was fine in 9.3.
Tom
Super User

@sathya66 wrote:
Yeah, I thought the same, but we have been running this same code for years. As I said above, it was fine in 9.3.

Check what else has changed.

Is SAS still running on the same server? If it is a different server, is it "further" from the data? Is the speed of the server the same? The memory? The disk drives? Is the load on the server the same?

 

Are the librefs defined differently? Was the old one using a database-specific engine (ORACLE vs ODBC, for example)?

Have the defaults for database access changed (block size, perhaps)?

sathya66
Barite | Level 11
Yes, everything has changed (server; speed and memory are higher than in 9.3).
9.4 is more advanced than 9.3.
9.3 was on-premises and 9.4 is in the cloud. 9.3 was a single-machine install (metadata, compute and mid-tier on the same server) but 9.4 is a multi-machine environment.
9.3 used ODBC.
9.4 uses SAS/ACCESS.
Every job is working fine after the migration except this one.
I suspect the BETWEEN ... AND condition and the date extraction.

Thanks.
SASKiwi
PROC Star

If your cloud SAS and Teradata data sources are now physically much further apart than in your previous installation, that alone could be causing the slowness. For example, if Teradata and SAS were in the same data centre but are now hundreds of kilometres apart, that could easily explain the performance you are seeing. Do you have other large data extracts, and is their performance fine?

 

sathya66
Barite | Level 11
Yes, we have, and their performance is fine, but users are executing their code half in TD (i.e. extract, summary, etc.) and half in SAS (i.e. first.V processing, the final report, etc.).
andreas_lds
Jade | Level 19

@sathya66 wrote:
Yes, everything has changed (server; speed and memory are higher than in 9.3).
9.4 is more advanced than 9.3.
9.3 was on-premises and 9.4 is in the cloud. 9.3 was a single-machine install (metadata, compute and mid-tier on the same server) but 9.4 is a multi-machine environment.
9.3 used ODBC.
9.4 uses SAS/ACCESS.
Every job is working fine after the migration except this one.
I suspect the BETWEEN ... AND condition and the date extraction.

Thanks.

The problem could be caused by moving SAS into the cloud if the DB server is still in its old location.

ChrisNZ
Tourmaline | Level 20

You can use   options sastrace = ',,,db' ;    to see what SQL is sent to the remote database.

You could compare with the SQL used in 9.3.

I suspect you'll see that no  where  clause is passed to Teradata.
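For reference, a commonly used form of that option set, with the trace routed to the SAS log and timestamps suppressed:

```sas
options sastrace=',,,db' sastraceloc=saslog nostsuffix;
/* run the PROC SQL step, then inspect the log for the SQL sent to Teradata */
```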

 

The way the query is coded, everything must match your local table AUTHS, so the entire Teradata data is brought back to the SAS server. I suspect the speed or throughput is not as good in the new environment, as @SASKiwi said.

 

Ideally, you'd upload the table of IDs (and probably the date) from AUTHS to Teradata to do the subsetting there, as @Kurt_Bremser said.

 

If you cannot do this, another, more cumbersome, way is to make the IDs part of the query. You would add where ID in (10 million values) to your code, supposing Teradata (and SAS) can deal with such a long query. If it cannot, you need to run successive queries with a limited number of IDs.

 

Considering you currently match the Teradata data against your local table, there is no other way: the data is brought back and then matched locally. So you have to find a way to subset the data remotely. I described these 2 ways: upload the required IDs as a table, or upload them as code.
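A rough sketch of the second (in-code) variant, assuming the ID set is small enough to fit in a macro variable (the ~64K-character macro variable limit rules this out for 10 million IDs without batching; the output table name is illustrative):

```sas
proc sql noprint;
  /* build a comma-separated ID list from the local table */
  select distinct id into :idlist separated by ','
    from auths;

  /* pass it inline so Teradata can subset before sending rows back */
  create table work.account_subset as
  select * from test2.account
  where id in (&idlist);
quit;
```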

 

[Edit: I assume you want to match only a small portion of the Teradata data. If you match most of it, there is little point uploading the IDs, and little you can do to optimise your code: it looks simple enough.]

 

sathya66
Barite | Level 11
I'll try to execute it in TD.
ChrisNZ
Tourmaline | Level 20

I took the time to explain to you what's happening and why, and how you can improve performance. What's the issue?

sathya66
Barite | Level 11
We are executing it in TD and it resolved the issue.
Thanks all.
