The real limit is imposed by your main system resources: memory, CPU, and I/O. Whenever you run out of one, your system is overtasked. When that happens depends not on LSF but on the usage characteristics of whatever is running on your system and on how you tune things. You will have to measure and learn in order to determine how many concurrent jobs your system can run.
I have learned that no two systems are alike in this respect. I am always pleased with how LSF allows one to adapt easily.
In our case we have a mix of batch jobs and workspace servers. Workspace servers (for SAS EG, Studio, Enterprise Miner, ...) are the big unknowns. Some people go wild all day; others start a session, run a small query, and go into meetings for the rest of the day without signing off. But the batch jobs, too, can range from steamrolling our servers to sitting idle for hours waiting for Teradata to complete some big in-database voodoo. So we needed months to figure out what happens and where the limits are. In fact we have now settled on a 1:6 ratio between the number of CPUs and the number of job slots on each host in the grid.
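For illustration, that ratio ends up as a per-host MXJ (maximum job slots) value in the lsb.hosts configuration file. This is a minimal sketch, not our actual config: the host names and CPU counts are hypothetical, and I am assuming a 4-CPU host here, so 1:6 gives MXJ 24.

```
Begin Host
HOST_NAME     MXJ     r1m    pg     ls     tmp    DISPATCH_WINDOW  # Keywords
sasgrid01     24      ()     ()     ()     ()     ()               # 4 CPUs x 6 slots
default       !       ()     ()     ()     ()     ()               # ! = one slot per CPU
End Host
```

After editing lsb.hosts you reconfigure with badmin reconfig and watch slot usage with bhosts while the usual mix of sessions and batch jobs runs, to see whether the ratio holds up.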
So the best advice I can give is: measure, learn, experiment, and adjust. No two systems are the same.
Hope this helps,
-- Jan.