Hi Bill,
What operating system are you working on?
I'd recommend you have a look at using fio for your I/O testing. Spencer Hayes did a great paper on it here, but I'd suggest starting simple: run some very basic read-only and write-only tests first to see whether the problem is something like the way write-caching is handled, or whether it really is a read performance issue.
For the sake of argument, if you're using RHEL, install fio via yum and, as per the paper, create two test specifications - one for write and one for read testing. The paper will also tell you how to run the tests. Your read test spec file could look something like this:
[interleave]
directory=/wherever/yourdisk/ismounted
; buffered I/O; invalidate=1 drops any cached pages for the test file before the run
direct=0
invalidate=1
blocksize=128k
rw=read
size=10G
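Just as a rough sketch - assuming you save that spec as, say, read-test.fio (the name is only an example) - installing and running it is as simple as:
yum install fio
fio read-test.fio
Depending on your RHEL release you may need the EPEL repository enabled to get the fio package. fio prints per-job bandwidth and latency figures at the end of the run, which is what Spencer's paper walks you through interpreting.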
For the write test you can use the same file; just change rw=read to rw=write. Spencer's paper will tell you how to interpret the output. I'd also suggest trying direct=1 to bypass the OS cache (you're testing your storage, after all), and matching the blocksize to your RAID stripe size to see if it makes a drastic difference.
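For example - purely as an illustration, with a made-up 256k stripe size - a direct-I/O write test would look like:
[interleave]
directory=/wherever/yourdisk/ismounted
; direct=1 bypasses the OS page cache so you measure the array rather than memory
direct=1
invalidate=1
; match this to your actual RAID stripe size - 256k is just a placeholder
blocksize=256k
rw=write
size=10G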
On a personal note, I think you'll need far better storage throughput to get the most out of those CPUs. In my experience, with general usage, 400 MB/sec of throughput is barely enough to saturate a single core if you use this disk array as a saswork scratch location as well; if one core can consume all of it on its own, the rest of the box will mostly be sitting idle waiting on the array. I think you'll be lucky to see your CPU utilisation hit 10%.
Hope this helps.
Nik