Why does my temp array unduplication utterly fail vs. a HASH?
Objective : To get unique account_numbers
Source data: 33 Million records
/* Runs forever, not completing at all :( */
data w;
  do until(z);
    set aqr.rp_portfolio_&min-aqr.rp_portfolio_&max
        (keep=account_number portfolio cra_flag period id_investor)
        end=z;
    where (period >= 1439 and id_investor not in ('014', '015', '091', '092'))
          or period < 1439;
    array t(9999999) _temporary_;
    length port_name $30;
    if account_number in t then continue;
    select;
      when (portfolio='Mortgage')                                   port_name='Treasury';
      when (portfolio='Resi' and cra_flag='Y')                      port_name='CRA';
      when (portfolio in ('Resi','HELOAN Low/No') and cra_flag='N') port_name='Resi (No CRA)';
      when (portfolio='Specialty')                                  port_name=portfolio;
      otherwise                                                     port_name=' ';
    end;
    if port_name > ' ' then do;
      n+1;
      t(n)=account_number;
      output;
    end;
  end;
  stop;
  keep account_number port_name;
run;
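For readers following along outside SAS: the temp-array-plus-IN approach boils down to a linear membership test per record. A rough Python analogue (illustrative only; the function name is invented) looks like this:

```python
# Illustrative analogue of the temp-array step: `seen` plays the role of
# the _temporary_ array t, and `acct in seen` mirrors the IN operator.
def dedup_with_list(records):
    seen = []
    out = []
    for acct in records:
        if acct in seen:      # linear scan: O(len(seen)) per record
            continue
        seen.append(acct)     # like n+1; t(n)=account_number
        out.append(acct)
    return out

print(dedup_with_list([5777, 1829647, 5777, 7329386, 1829647]))
# -> [5777, 1829647, 7329386]
```

The SAS case is even worse than this sketch: `account_number in t` compares against all 9,999,999 declared array slots, not just the ones filled so far.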
What's wrong with me (or my eyes) that I'm not spotting something very obvious in the above, when the HASH equivalent below finishes with these stats?
real time          6:49.53
user cpu time      1:26.36
system cpu time    24.94 seconds
memory             33221.57k
OS Memory          58464.00k
data w;
  dcl hash h();
  h.definekey("account_number");
  h.definedata("account_number");
  h.definedone();
  do until(z);
    set aqr.rp_portfolio_&min-aqr.rp_portfolio_&max
        (keep=account_number portfolio cra_flag period id_investor)
        end=z;
    where (period >= 1439 and id_investor not in ('014', '015', '091', '092'))
          or period < 1439;
    length port_name $30;
    if h.check() = 0 then continue;
    select;
      when (portfolio='Mortgage')                                   port_name='Treasury';
      when (portfolio='Resi' and cra_flag='Y')                      port_name='CRA';
      when (portfolio in ('Resi','HELOAN Low/No') and cra_flag='N') port_name='Resi (No CRA)';
      when (portfolio='Specialty')                                  port_name=portfolio;
      otherwise                                                     port_name=' ';
    end;
    if port_name > ' ' then do;
      h.add();
      output;
    end;
  end;
  stop;
  keep account_number port_name;
run;
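The hash version behaves like deduplication with a set: membership (`h.check()`) and insertion (`h.add()`) are average O(1). A Python analogue, again illustrative only:

```python
# Illustrative analogue of the hash step: `seen` plays the role of the
# hash object; membership and insertion are O(1) on average.
def dedup_with_set(records):
    seen = set()
    out = []
    for acct in records:
        if acct in seen:      # h.check() = 0 -> key already stored
            continue
        seen.add(acct)        # h.add()
        out.append(acct)
    return out

print(dedup_with_set([5777, 1829647, 5777, 7329386, 1829647]))
# -> [5777, 1829647, 7329386]
```

Each of the 33 million input records costs one hash probe instead of a scan of up to 10 million array slots, which is why this step finishes in minutes.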
Any thoughts?
OK, here's an approach that might be feasible, might not. The issues:
- It treats the account number as numeric. While you can easily work around that, the cost of doing so is not clear.
- It uses the direct array lookup that @mkeintz mentioned.
- It needs a lot more memory; whether you can get that much is not clear, on the order of 400 times what you are using now for the temporary array. The array expands to 11 digits, since the largest account number you illustrated is 11 digits long. The memory requirements might be shrinkable, since it appears that you don't really need a length of $30 for PORT_NAME.
- I took out the DO loop, since I didn't see a good purpose for keeping it. Perhaps its purpose was to maintain a top-to-bottom structure for the DATA step logic.
If all this is acceptable, it should be lightning fast at loading the values, and should take a bit more time to unload them.
data w;
  array t {99999999999} $ 30 _temporary_;
  if z then do account_number = 1 to 99999999999;
    if t{account_number} > ' ' then do;
      port_name = t{account_number};
      output;
    end;
  end;
  set aqr.rp_portfolio_&min-aqr.rp_portfolio_&max
      (keep=account_number portfolio cra_flag period id_investor)
      end=z;
  where (period >= 1439 and id_investor not in ('014', '015', '091', '092'))
        or period < 1439;
  if t{account_number} > ' ' then return;
  select;
    when (portfolio='Mortgage')              t{account_number}='Treasury';
    when (portfolio='Resi' and cra_flag='Y') t{account_number}='CRA';
    when (portfolio in ('Resi','HELOAN Low/No') and cra_flag='N')
                                             t{account_number}='Resi (No CRA)';
    when (portfolio='Specialty')             t{account_number}='Specialty';
    otherwise;
  end;
  keep account_number port_name;
run;
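The direct-lookup trick above can be pictured in Python: the account number itself is the array subscript, so there is no search at all, only one index operation per record. This sketch is illustrative; a real 11-digit key space would need enormous memory, which is exactly the trade-off described above:

```python
# Illustrative direct-lookup dedup: the key is the subscript, so lookups
# and inserts are a single index operation, not a search.
def dedup_direct(records, max_key):
    t = [None] * (max_key + 1)        # analogue of array t{...} _temporary_
    for acct in records:
        if t[acct] is None:
            t[acct] = "seen"          # the SAS step stores the port name here
    # unload pass, like `if z then do account_number = 1 to ...`
    return [k for k, v in enumerate(t) if v is not None]

print(dedup_direct([3, 1, 3, 5], 5))
# -> [1, 3, 5]
```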
You could have more than 10 million unique accounts.
Also, the search in the array is sequential, while the hash find uses a b-tree or something similarly performant.
Sir, "You could have more than 10 million unique accounts": this is funny, though; we wish we were that big, with such a huge portfolio. However, for a regional bank, our portfolio size isn't too bad either. 🙂 Nonetheless, that did make me think more. I am taking 82 months of data, min to max (range), as you would have noticed in the SET statement. The successful HASH output has given me 190,000, which is right.
Hmm, so am I to believe and learn that the ARRAY IN operator search is useless?? Oh gosh!
Are your account numbers integers in the range [-constant('bigint'), constant('bigint')]? Probably not, since I think you would likely use direct array lookup in such a case.
But are the account numbers mappable in a 1 to 1 correspondence to such a range? If so, then instead of searching an array you could do direct lookup of the transformed account number.
Could you tell us the structure of your account numbers?
Looks something like this:
1829647
7329386
7339930
9515503
10005882
10039832
10053965
10065076
10068369
10073393
5777
70396902245
And all are stored as numeric.
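To picture the 1-to-1 mapping idea with these sample values: if each account number can be turned into a dense slot number, later lookups become direct indexing. The sketch below builds that mapping with a Python dict, which is itself a hash; the interesting case in the thread is when a closed-form mapping exists and no hash is needed at all:

```python
# Illustrative: enumerate the distinct keys once to get dense slot numbers.
sample = [1829647, 7329386, 7339930, 9515503, 5777, 70396902245]
index = {acct: slot for slot, acct in enumerate(sorted(set(sample)))}

# 70396902245 (11 digits) now maps to a small subscript, so an array of
# len(index) elements suffices instead of one sized to the largest key.
print(index[5777], index[70396902245])
# -> 0 5
```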
> am I to believe and learn ARRAY IN operator search is useless?
As @Kurt_Bremser said, "the search in the array is sequential, while the hash find uses a b-tree".
For each new account (there are 200,000), you are sequentially searching through 10 million array values.
That's 2e5 * 1e7 = 2e12 comparisons. Two thousand billion. 2,000,000,000,000 sequential comparisons.
Plus the searches for the other 33,000,000 records, where a key is found earlier without scanning the whole array.
How do you expect this to be fast?
And even if magnitude wasn't an issue (it is), how could a sequential search be compared to an indexed hash search?
So, useless? Absolutely not. The IN operator for an array has its place. But tools are only good when used within their specifications. An F1 car is useless off-road, and a buggy is useless on the tarmac. Likewise, expecting to search an array the way you are trying can only fail.
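A quick sanity check on the arithmetic above, in Python for illustration:

```python
# Worst-case comparison count for the array-IN approach, as quoted above.
new_accounts = 200_000         # distinct keys that must be inserted
array_size   = 10_000_000      # declared size of the _temporary_ array
worst_case = new_accounts * array_size
print(f"{worst_case:,}")
# -> 2,000,000,000,000
```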
That's very disappointing to know it's sequential.
Does that mean WHICHC and WHICHN will also be useless? Do they follow suit (i.e. sequential search)?
@novinosrin Don't blame the tool. All tools have specifications and limitations. The key is to know these and use the tools in an optimal manner.
You can also suggest improvements, but that's another matter (and it doesn't preclude knowing the specifications and limitations).
Which begs the question: why create a 10-million-element array to store 200,000 values?
Agree.
@novinosrin wrote:
That's very disappointing to know it's sequential.
Does that mean WHICHC, WHICHN all will also be useless too? Do these also follow suit (i.e. sequential search)?
It can't be anything but sequential. While the hash builds a search tree for the key variable(s), there is no such thing for an array.
And the array has no way of "knowing" how many elements it holds, so the IN in the first DATA step iteration already goes through all 10 million elements. Reducing the array definition to the needed minimum will speed things up.
But nothing will beat the hash here.
Sir @Astounding, you nearly gave me a heart attack. Luckily I checked the doc again before I jumped out of the building, which the ARRAY IN episode was making me contemplate. 🙂
From the doc:
"Specifies actions SAS is to take when it reads variables or observations into the program data vector for processing."
Tip: Data set options that apply to a data set list apply to all of the data sets in the list.
See: For more information, see Definition of Data Set Options in SAS Data Set Options: Reference for a list of the data set options to use with input data sets.
I guess your tables are not sorted or indexed by account number?