pcur
Fluorite | Level 6

I am running a fairly large simulation study. There are multiple procs involved, including bootstrapping, randomized sampling, proc nlp, and proc genmod, as well as a series of data steps in between, some of which generate fairly large datasets.

 

Given the above, and since my base dataset is approx 48 GB, I installed a new 4 TB drive to accommodate the storage demands. This appears to be more than sufficient, as running my whole simulation with its assorted data steps and procs never uses more than 250 GB. (This 4 TB drive is otherwise empty, excepting the 500 GB allocated for the OS and other programs/data, so really about 3.5 TB available.)

 

I nonetheless am getting this error at a predictable point - i.e., always on BY group XXX - in the midst of PROC NLP. It reads:

 

ERROR: File Work.'SASTMP-0001108070'n.UTILITY is damaged. I/O processing did not complete. 

The SAS System stopped processing this step because of errors.

 

To address this apparent memory issue I've added the MEMSIZE MAX option to the sasv9.cfg file referenced by my SAS shortcut. I've also routed one-level dataset names to a user-specified library instead of WORK (e.g. libname xx 'c:\xx'; options user=xx;). I also usually run the simulation with a proc printto that directs my log to a dummy file.
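Roughly what those measures look like, for reference (the paths and the xx libref below are placeholders rather than my actual setup):

-MEMSIZE MAX                               /* in sasv9.cfg or on the SAS command line: let SAS use as much RAM as the OS allows */

libname xx 'D:\simwork';                   /* route one-level (temporary) dataset names to the big drive */
options user=xx;                           /* instead of the default WORK library */

proc printto log='D:\simwork\dummy.log';   /* keep the long simulation log out of the log window */
run;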

 

Despite the above measures, and apparently ample available resources (again, literally terabytes of disk space still free on this single-user Windows PC), I am getting this seemingly memory-related error message.

 

Any insights or suggestions would be much appreciated. 

 

 

1 ACCEPTED SOLUTION

Accepted Solutions
pcur
Fluorite | Level 6

Hi Zeke, 

 

Thanks for the input. I did arrive at a solution after contacting SAS tech support. 

 

The problem related to an I/O mismatch originating in the use of proc nlp with huge datasets. I was able to correct it by specifying the -SGIO option on the command line when launching SAS. This enables Windows scatter-read/gather-write I/O, which bypasses intermediate system caching and can improve throughput for very large reads and writes.
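For anyone who wants to try it, the option goes on the command line (or in the shortcut's Target field) when launching SAS; the install path below is a typical default and may differ on your machine:

"C:\Program Files\SASHome\SASFoundation\9.4\sas.exe" -SGIO

The same option can also be added as a -SGIO line in sasv9.cfg so it applies to every session.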

 

I hope that is helpful to others who may encounter similar issues, though I suspect this will be specific to the usage of proc nlp with large (~50 GB) datasets. To clarify, this was occurring in a desktop Windows environment with SAS 9.4, was unrelated to RAM usage or available memory (I have 16 GB, with no more than 8-10 GB in use when the proc failed), and was also unrelated to log output (which, as I indicated, was redirected to a dummy file via proc printto, a step I suggest for anyone doing large, multi-step simulations that would otherwise quickly choke the log).


2 REPLIES
zekeT_sasaholic
Quartz | Level 8

This seems like an older post.

I wonder if it's been resolved.

I just found this today, and from a high level I suspect an actual I/O issue to RAM.

There is some information here, but it's limited.

Information that would be useful: RAM size, disk size, OS, and does your BIOS allow all that extra space to be recognized?

Next: SAS version and its configuration. You do mention some configs you did, but you don't mention whether you have maybe partitioned the extra TB of drive space and sent 'sas work' to that partition.
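For example, something along these lines in the SAS shortcut target or in sasv9.cfg (the D: path is just an illustration, not a known detail of your setup):

-WORK "D:\saswork"

That puts the WORK library, and the utility files procedures like proc nlp write there, on the big, mostly empty drive.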

 

Anyway - there also isn't any indication whether this is all your laptop/desktop is doing. Any other process might rob you of RAM/disk space.

And you did mention the specification of log/list files - but I would actually be concerned if the log is rampant with extra/verbose messages that just chew up too much space. Consider maybe sending that to a flash drive or external drive so as to save space for the 'work'. And finally - it's sounding like a pretty brutal punishment for a laptop - I really hope you're not beating that PC too badly.

 

And if you are running on a VM... eeeesh.

 

best

zeke torres

www.wcsug.com

