Thanks Uttam for the post. We are testing CAS by loading sas7bdat files with both PROC CAS and PROC CASUTIL. The system we are using is a 5-node CAS deployment (1 controller, 4 workers). Our CAS_DISK_CACHE is very fast; we are using an EMC D5, with GPFS on the D5 for our shared (input) file system. We too see the copies=0 performance boost.

What actually happens with copies=1 (the default) is that the file is read into RAM (CAS in-memory), then copied down to CAS_DISK_CACHE (only as fast as your storage can write), and then CAS replicates the rows across the network (another potentially slow step). Our system really hurts with copies=1 because we only have 1 GbE between workers; 10 GbE would help us a bit.

We also see a 25% increase in runtime if we don't set maxTableMem larger than the file size. Watching iostat, we can easily see massive I/O to CAS_DISK_CACHE when maxTableMem is smaller than the file size, because CAS is making a copy down on disk. That is not always optimal, especially with very large files that you only want to keep in memory temporarily. Also, many customers don't have a fast CAS_DISK_CACHE. If you need to write to CAS_DISK_CACHE, you had better have fast disks (SSD) or striped RAID storage, or you may find that maxTableMem really does matter. You might get lucky and see decent performance with a single disk backing the cache, but only with a single user; wait until multiple sessions start thrashing the poor CAS_DISK_CACHE at the same time.

Just wanted to say that your mileage may vary and that setting maxTableMem may be really important for performance, especially if you are treating CAS like SASWORK, don't want to waste disk space, and have lots of RAM for CAS.
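For anyone who wants to experiment, here is roughly what our test load looks like. This is only a sketch: the caslib, path, table name, and the 40 GB value are placeholders, and the exact syntax for setting maxTableMem (units, where it can be set) should be checked against your Viya release.

/* Sketch only: caslib, path, table name, and sizes are placeholders.       */

/* Raise maxTableMem above the size of the sas7bdat so the table stays in   */
/* RAM rather than being backed down to CAS_DISK_CACHE.                     */
/* Value is in bytes; 42949672960 = 40 GB -- set it larger than your file.  */
proc cas;
   sessionProp.setSessOpt / maxTableMem=42949672960;
quit;

/* Load the server-side sas7bdat with copies=0 so no redundant block copies */
/* are written to CAS_DISK_CACHE or shipped over the (1 GbE in our case)    */
/* interconnect between workers. "gpfsdata" is assumed to be a path-based   */
/* caslib pointing at the GPFS file system.                                 */
proc casutil;
   load casdata="bigtable.sas7bdat" incaslib="gpfsdata"
        outcaslib="casuser" casout="bigtable"
        copies=0;
quit;

As I understand it, copies=0 simply means no redundant failover blocks for that table, which is why the cache write and the cross-worker replication can be skipped; the trade-off is that the table has to be reloaded if a worker goes down.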