04-03-2018 02:39 AM
I am a bit confused about whether to use a subquery or a join; we don't know which is more efficient, option 1 or option 2. We are using SAS EG 6.1, and the tables are quite large, almost 6 million records.
We have used option 1 below to fetch the required datasets using a subquery, but others are suggesting option 2, which uses a join.
As per them, option 1 internally rewrites the subquery as a WHERE clause with a chain of ORs over all the product ids,
like 'where product_id = 782656 or product_id = 78555 or product_id = 55421268' and so on.
/* dataset having the required product_id values with price > 100 */
proc sql;
   create table all_items as
   select distinct product_id
   from catalogue
   where price > 100;
quit;

/* Option 1: subquery */
proc sql;
   create table items as
   select product_id, item_id, price, description
   from catalogue
   where product_id in (select product_id from all_items);
quit;

/* Option 2: inner join */
proc sql;
   create table items as
   select a.product_id, a.item_id, a.price, a.description
   from catalogue as a
   inner join all_items as b
     on a.product_id = b.product_id;
quit;
However, as per my understanding both queries take the same amount of time. Does SAS automatically optimize PROC SQL queries, so that both give the same result with the same CPU and elapsed time?
Thanks for the help!
04-03-2018 04:23 AM
You can use the FEEDBACK option to see how SAS normalizes the query.
You can use the undocumented _method option to see how SAS is actually treating the query.
From an optimization perspective, indexing can help in some situations.
You can set the BUFFERSIZE= option higher than the default 64K to increase the likelihood that a hash join takes place.
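To illustrate, here is a sketch of how those diagnostic options could be applied to the join from the original post (table and column names are taken from the thread):

```sas
/* FEEDBACK prints the expanded/normalized query to the log;
   _method (undocumented) prints the chosen access plan there too,
   e.g. sqxjhsh = hash join, sqxjm = merge join, sqxsrc = table read. */
proc sql feedback _method;
   create table items as
   select a.product_id, a.item_id, a.price, a.description
   from catalogue as a
   inner join all_items as b
     on a.product_id = b.product_id;
quit;
```

Comparing the _method output of the two options in the log should show whether SAS has chosen the same plan for both, which would explain the identical run times.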
04-03-2018 07:21 AM
How many observations does all_items have? This will determine if there are methods available that will outperform your SQL considerably.
04-03-2018 07:59 AM
all_items has almost 64 million rows.
Can't be. A subset of 6 million rows (as stated in your original post) can't have 64 million rows.
04-03-2018 09:14 AM
Oops, my bad, all_items has almost 4 million rows.
Depending on the size of your index variable, you might be able to fit a format (or a hash table) based on this into memory, so you could solve your task with two sequential passes (one through all_items for the format, the other for the main dataset).
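A minimal sketch of the hash-table variant, assuming product_id fits in memory for all 4 million rows of all_items (dataset and variable names are taken from the thread; adjust to your real data):

```sas
/* Load the qualifying product_ids into an in-memory hash,
   then filter catalogue in a single sequential pass. */
data items;
   if _n_ = 1 then do;
      declare hash h(dataset:'all_items');
      h.defineKey('product_id');
      h.defineDone();
   end;
   set catalogue;
   if h.find() = 0;  /* keep rows whose product_id is in all_items */
run;
```

This avoids sorting either table, at the cost of holding the key list in memory.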
Alternatively, if your main dataset is already sorted along product_id you can do this:
data want;
   merge
      catalogue (
         in=a
         keep=product_id price
         rename=(price=_price)
         where=(_price > 100)
      )
      catalogue
   ;
   by product_id;
   if a;
   drop _price;
run;
Physically, this would be a single pass through the data, as most (if not all) of it would still be in the cache for the second read.
I always like to keep my datasets sorted along the most important key.