I'm using the master table as the one with 126 million rows. Below is the hash code that generated the memory error. Again, this is running on the server. I'm checking with the server admin to see whether increasing MEMSIZE would be viable without adversely affecting anything else. Other than that, I'm at a loss ...

data mastertest;
   if _n_ = 1 then do;
      declare hash e(dataset: 'master');
      e.definekey('key');
      e.definedata('datetime', 'detailid', 'datasetid');
      e.definedone();
      call missing(datetime, detailid, datasetid);
   end;
   set lookup (obs=15);
   drop rc;
   rc = e.find();
   if rc = 0 then key = catx(' ', datetime, detailid, datasetid);
   else key = '** Not Found';
run;
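One possible workaround, sketched below under the assumption that the dataset and variable names match the post (master, lookup, key): since lookup is tiny (15 rows) and master is the 126M-row table, the hash roles can be reversed so only the small table is held in memory and master is read sequentially. Note this yields the matching master rows rather than the annotated lookup rows produced by the original step, so it may need adapting.

```sas
/* Sketch only: hash the small lookup table instead of master.      */
/* Memory use is proportional to lookup (15 rows), not 126M rows.   */
data matched;
   if _n_ = 1 then do;
      declare hash e(dataset: 'lookup (obs=15)');
      e.definekey('key');       /* same key variable as in the post */
      e.definedone();
   end;
   set master;                  /* one sequential pass over master  */
   if e.check() = 0;            /* keep rows whose key is in lookup */
run;
```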