Originally I had this value set to 4GB, but many users' programs could not execute, so I raised it to 22GB, and several users' programs still failed. As a test I raised it to 64GB and all programs could execute, but this worries me, as I don't think the value should need to be that high. The users do work with quite large datasets, but is there anything they can do on their end to limit the amount of memory their programs require? What would be the ideal MEMSIZE setting for a server like this, or does it depend entirely on the size of the datasets users are working with?
Some procedures can be made much less memory-consuming by sorting the dataset first and using a BY statement instead of CLASS, for example.
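As a rough sketch of that idea (the dataset and variable names here are made up for illustration): with CLASS, the procedure holds the whole cross-classification in memory at once; after a sort, BY processes one group at a time, so memory use stays roughly flat regardless of how many groups there are.

```sas
/* CLASS: all combinations of region*product are summarized in memory */
proc means data=work.big mean;
   class region product;
   var sales;
run;

/* BY: sort once, then summarize one group at a time */
proc sort data=work.big;
   by region product;
run;

proc means data=work.big mean;
   by region product;
   var sales;
run;
```

The trade-off is the cost of the PROC SORT step (which uses disk work space rather than memory), but for large datasets that is usually the cheaper resource.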
It all depends on what kind of analysis is done.
It is worth remembering that the default value of MEMSIZE at installation time as set by SAS is around 3 or 4GB. This value is a good compromise between allowing most SAS jobs to run without memory problems, while still leaving enough for other SAS jobs to run at the same time.
Greatly increasing this value for all SAS users is not recommended for the reasons explained by @JuanS_OCS. Small increases up to say 8GB are probably OK but that depends on the number of simultaneous SAS sessions compared to server physical memory.
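For reference, MEMSIZE can be set site-wide in the SAS configuration file or overridden per session; the 8G figure below is only the illustrative ceiling mentioned above, not a recommendation for your server.

```sas
/* in sasv9.cfg (site-wide default for all sessions) */
-MEMSIZE 8G
```

Individual power users who genuinely need more can then launch a session with a higher limit (e.g. `sas -memsize 16G program.sas`) rather than raising the default for everyone.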
I had a conversation with @ChrisNZ a while back where we discussed MEMSIZE and related tunables at length. Here is a snippet of that conversation where I shared some of my thoughts on the topic at hand.
With modern machines I'd say this (memory allocator limits and related tunables) matters less and less. When your computer had 32 KB of memory, you invested in software that made smart decisions about memory to get your problem solved. With so little memory in the machines of the time, it was easy for your working set of data to be orders of magnitude larger than the memory you could physically fit in a machine. On top of this, the ISA of most CPUs only supported addressing up to a maximum of 4GB of virtual memory. When this option was added to SAS, I'm sure no one considered that one day we'd need a new set of instructions, and entirely new CPUs, to address more than 4GB of memory. For a lot of computing problems now, if you don't have enough memory you just buy more (or cluster machines, e.g. the SAS LASR Analytics Server). This is likely reflected in the SAS procedures too: I suspect only the older procedures have the smarts to dynamically pick algorithms based on the actual amount of memory in the system.
With the Linux kernel's control groups (cgroups) feature now quite mature, this is the route I'd take to stop users taking all the memory on a box, if the issue arose. The MEMSIZE tunable inside SAS is great, but it has some obvious shortcomings. It only tracks memory allocated through the SAS session's memory management subsystem; memory allocated by third-party software (like database client drivers) isn't accounted for. Another problem is that a user can simply spawn multiple processes and easily consume more memory than you expected them to. With control groups you can set resource limits on all of a user's processes as a whole. This covers memory allocated by their SAS processes, by loaded third-party libraries (e.g. database client libraries), and even by processes spawned from their SAS session which get a whole new address space (e.g. bulk loaders, X statements, etc.).
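As a hedged sketch of what that could look like on a modern Linux host (the user ID and the 16G cap are assumptions for illustration, and both commands require root):

```shell
# Using systemd, which manages cgroups on most modern distributions:
# cap everything running in user 1001's session at 16 GB of memory.
systemctl set-property user-1001.slice MemoryMax=16G

# Equivalent direct write to the cgroup v2 filesystem
# (the exact slice path depends on the host's cgroup layout):
echo 16G > /sys/fs/cgroup/user.slice/user-1001.slice/memory.max
```

Unlike MEMSIZE, this limit applies to the user's entire process tree, so spawning extra SAS sessions or external bulk loaders cannot escape it.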
For @gothaggis, the simplest answer I'd give is: set it high enough that users don't complain, and re-evaluate if you see the OOM killer invoked and the box running out of memory.