11-04-2014 04:59 AM
I would like to scale up the SAS program in the attachment. The program itself works OK, but gives an out-of-memory error when I use it on a (very) large dataset. The critical steps are 2 hash look-ups. My questions are:
1) Are there any obvious mistakes in the 2 look-ups that are likely to use too much memory when applied to a very large dataset (I am new to hashes)?
(actual error message:
ERROR: Hash object added 212976 items when memory failure occurred.
FATAL: Insufficient memory to execute DATA step program. Aborted during the EXECUTION phase.
ERROR: The SAS System stopped processing this step because of insufficient memory.)
2) My memory options are: memsize 4G, realmemsize 2G, sortsize 1G, msymtabmax 200M. Is there anything wrong with that (in a hash context)?
3) I could do this task with merging, without a hash. Could it be that merging is more "forgiving" than using a hash object?
11-04-2014 05:50 AM
First you need to make sure that you can really use memory in the (4) GB range. It may well be that you are constrained by user limits imposed by the operating system (e.g. ulimit on UNIX) that are smaller than those set in SAS.
But there will be a time when it is better to do the sort/merge dance. That scales until you run out of disk space, and more disk space is cheaper than RAM sticks.
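A minimal sketch of the sort/merge alternative mentioned here (all dataset names, and the key BoM_ID, are placeholders; adapt them to the actual program):

```sas
/* Disk-based alternative to a hash lookup: sort both tables     */
/* on the key, then merge. PROC SORT spills to utility files on  */
/* disk, so this scales with disk space rather than RAM.         */
proc sort data=big_table; by BoM_ID; run;
proc sort data=lookup;    by BoM_ID; run;

data want;
   merge big_table (in=in_big)
         lookup    (in=in_lkp);
   by BoM_ID;
   if in_big;   /* keep every row of the large table (left join) */
run;
```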
11-04-2014 11:42 AM
It could be time-consuming to figure out what all of this code does.
I'd suggest that when you post a question, try to "boil down" your code to a small example that shows the problem, that other people can copy and run.
11-04-2014 02:45 PM
SAS has to store the full hash table in memory, so it's not surprising that a small version of the data sets runs when a larger version does not.
One way to cut down the memory needed is to reduce the length of BoM_ID. It is 200 characters long, when (at least at a quick glance) a length of 6 is all you need. That alone might be enough to do the trick. If it isn't, you might just have to raise MEMSIZE.
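A minimal sketch of that idea, assuming the lookup table loaded into the hash is the mult dataset mentioned elsewhere in the thread and that $6 really is enough for BoM_ID. The hash object sizes its items from the host variables in the PDV, so declaring the key short before defineDone() shrinks every item:

```sas
data want;
   if _n_ = 1 then do;
      /* Declare the key with the length actually needed BEFORE   */
      /* the hash is defined: the hash takes its item layout from  */
      /* the PDV, so $6 instead of $200 saves ~194 bytes per item. */
      length BoM_ID $6;
      declare hash h(dataset: 'mult');
      h.defineKey('BoM_ID');
      h.defineDone();
      call missing(BoM_ID);
   end;
   set big_table;       /* placeholder name for the large input   */
   if h.find() = 0;     /* keep rows whose key exists in the hash */
run;
```

Note that values longer than the new length are silently truncated when the table is loaded, so this only works if the trailing characters are in fact blank.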
11-04-2014 02:51 PM
Your hash dataset has two $200 variables as its key (400 bytes); add some pointers (8 bytes each) and you can see how big each item gets.
If you can find the mult dataset on disk (DASD), look at its size and add an allowance for it, since everything must fit into memory.
With somewhat over 200k items that comes to roughly 85 MB. That is not really big, so either digits are missing from the counts in the post or something else is going on.
The size of the other hash, ps_bottom_short, is less clear; you can check its source dataset on disk as well.
Looking at the whole construction, I get the feeling it could be optimized in another way, but I have no clue what you are trying to do.
Review the REALMEMSIZE option; I do not understand its setting in your environment. See: SAS(R) 9.2 Companion for UNIX Environments.
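To see which memory settings are actually in effect for the session, you can ask SAS itself:

```sas
/* List all memory-related system options currently in effect, */
/* including MEMSIZE, REALMEMSIZE and SORTSIZE                 */
proc options group=memory; run;
```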
Merging (the data step DoW/point approach) was designed in an era when machines did not have today's memory sizes. It is quite forgiving on memory usage because it shifts most of the work to I/O. Caching improves I/O times (decreases them) up to a certain limit; once you pass that limit, your process slows down dramatically.