mnjtrana
Pyrite | Level 9

Hi guys,

I am a bit confused about whether to use a subquery or a join, and which of the two options below is more efficient. We are using SAS EG 6.1, and the tables are quite large, almost 6 million records.

 

We have used option 1 below to fetch the required datasets using a sub-query, but others are suggesting another approach, option 2, which uses a join.

 

As per them, option 1 gets rewritten internally into a WHERE clause chaining all the product ids with OR,

like 'where product_id = 782656 or product_id = 78555 or product_id = 55421268' and so on.

 

 

/* dataset holding the required product_ids with price > 100 */
proc sql;
create table all_items as
select distinct product_id
from catalogue
where price > 100;
quit;

/* Option 1: subquery */
proc sql;
create table items as
select product_id, item_id, price, description
from catalogue
where product_id in (select product_id from all_items);
quit;


/* Option 2: inner join */
proc sql;
create table items as
select a.product_id, a.item_id, a.price, a.description
from catalogue as a
inner join all_items as b
  on a.product_id = b.product_id;
quit;

 

However, as per my understanding both queries take about the same time. Does SAS automatically optimise PROC SQL queries, so that both end up with the same result and the same CPU and elapsed time?

 

Thanks for the help!

 


Cheers from India!

Manjeet
7 REPLIES
LinusH
Tourmaline | Level 20

You can use the FEEDBACK option to see how SAS normalizes the query.

You can use the undocumented _METHOD option to see how SAS is actually treating the query.

From an optimization perspective, indexing can help in some situations.

You can also set the BUFFERSIZE= option higher than its default of 64K to increase the likelihood of a hash join taking place.
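
A sketch of how these diagnostics could be applied to option 2 (dataset and column names are taken from the question; the index is only an illustration, and _METHOD is undocumented, so its log output may vary between releases):

/* print the expanded query (FEEDBACK) and the chosen plan (_METHOD) */
proc sql feedback _method;
create table items as
select a.product_id, a.item_id, a.price, a.description
from catalogue as a
inner join all_items as b
  on a.product_id = b.product_id;
quit;

/* if an index helps, it could be created like this */
proc sql;
create index product_id on catalogue(product_id);
quit;

In the log, _METHOD codes such as sqxjm (sort-merge join), sqxjndx (index join), and sqxjhsh (hash join) show which strategy the optimizer picked.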

Data never sleeps
Kurt_Bremser
Super User

How many observations does all_items have? This will determine if there are methods available that will outperform your SQL considerably.

mnjtrana
Pyrite | Level 9

all_items has almost 64 million rows.


Cheers from India!

Manjeet
mnjtrana
Pyrite | Level 9
Oops, my bad, all_items has almost 4 million rows.

Cheers from India!

Manjeet
Kurt_Bremser
Super User

@mnjtrana wrote:
Oops, my bad, all_items has almost 4 million rows.

Ok.

Depending on the size of your key variable, you might be able to fit a format (or a hash table) based on all_items into memory, so you could solve your task with two sequential passes (one through all_items to build the lookup, the other through the main dataset).
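
A minimal sketch of the hash variant, assuming all_items fits into memory and using the dataset names from the question (the format-based alternative would build a CNTLIN format from all_items and filter with a PUT test instead):

data items;
  /* load the lookup keys into an in-memory hash table once */
  if _n_ = 1 then do;
    declare hash h(dataset: "work.all_items");
    h.defineKey("product_id");
    h.defineDone();
  end;
  set catalogue;
  /* keep only rows whose product_id is found in all_items */
  if h.find() = 0;
run;

This reads each dataset exactly once, at the cost of holding the roughly 4 million keys in memory.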

Alternatively, if your main dataset is already sorted by product_id you can do this:

data want;
merge
  catalogue (
    in=a
    keep=product_id price
    rename=(price=_price)
    where=(_price > 100)  /* flags BY groups that have a price > 100 */
  )
  catalogue               /* full table, match-merged by product_id */
;
by product_id;
if a;                     /* keep only the flagged product_ids */
drop _price;
run;

Physically, this would be a single pass through the data, as most (if not all) of it would still be present in the cache for the second read.

 

I always like to keep my datasets sorted by the most important key.

Ksharp
Super User

JOIN is more efficient.

