Architecting, installing and maintaining your SAS environment

I/O Issues: read slower than write

Frequent Contributor
Posts: 92

I/O Issues: read slower than write

Hello Everyone,

The company that I work for recently installed SAS on a server.  Technical specifications are as follows:


  • IBM Flex System x240 compute node (machine type 8737)
  • 2 x 12-core Intel processors
  • 32 GB RAM
  • IBM Storwize V7000 SAN: RAID 5 on spinning disks and a RAID 1 SSD mirror


Currently, when I run SASIOTEST, it shows write speeds of 400+ MB/sec but read speeds of less than 100 MB/sec. Note that I'm working with both my IT department and SAS to resolve this issue; as of yet, nobody has come up with a solution. I'm posting here on the forum in the hope that one of you has encountered a similar situation and was able to improve performance. Any suggestions would be greatly appreciated. Thank you.


Trusted Advisor
Posts: 3,212

Re: I/O Issues: read slower than write

Posted in reply to BillJones

What about caching? Write-back caching could explain the difference you see between writing and reading, which leaves the question of how to speed up reads. What is the speed of the connection to the SAN, and is it shared or dedicated?
Also check the specific behaviour of the SAN, as it can do automatic tiering between speed classes. I once saw dramatic performance degradation after a SAN was replaced with a more modern, faster one.

---->-- ja karman --<-----
Frequent Contributor
Posts: 106

Re: I/O Issues: read slower than write

Posted in reply to BillJones

Hi Bill,

What operating system are you working on?

I'd recommend you have a look at using fio to do your I/O testing. Spencer Hayes did a great paper on it, but I recommend you start simple and run some very basic read-only and write-only tests first, to see whether it's an issue with something like the way write-caching is handled, or an actual issue with read performance.

For the sake of argument, if you're using RHEL, install fio via yum and, as per the paper, create two test specifications: one for write testing and one for read testing. The paper will also tell you how to run the tests. Your read test spec file could look something like this:

[interleave]
directory=/wherever/yourdisk/ismounted
direct=0
invalidate=1
blocksize=128k
rw=read
size=10G
For the write test you can use the same file, just change rw=read to rw=write. Spencer's paper will tell you how to interpret the output. I'd suggest trying direct=1 as well to bypass the OS cache (you're testing your storage after all), and matching the blocksize to your RAID stripe size to see if it makes a drastic difference.
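As a minimal sketch of the workflow described above (assuming RHEL with fio installed via yum; the directory placeholder from the spec is kept as-is and should be pointed at your actual mount point), the read spec can be written out and the write spec derived from it mechanically:

```shell
set -e

# Write the read-test job file (same contents as the spec above).
cat > read-test.fio <<'EOF'
[interleave]
directory=/wherever/yourdisk/ismounted
direct=0
invalidate=1
blocksize=128k
rw=read
size=10G
EOF

# Derive the write-test job file by flipping rw=read to rw=write.
sed 's/^rw=read$/rw=write/' read-test.fio > write-test.fio

# Sanity-check that the substitution took effect.
grep -q '^rw=write$' write-test.fio && echo "write spec ok"

# Run each test once fio is installed and the directory exists:
# fio read-test.fio
# fio write-test.fio
```

Deriving one spec from the other keeps the two tests identical in every parameter except direction, so any difference in the results is down to read vs write behaviour rather than a configuration mismatch.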

On a personal note, I think you'll need far better storage throughput to get the most out of those CPUs. In my experience, with general usage, 400 MB/sec of throughput is barely enough to saturate a single core if you also use this disk array as a SASWORK scratch location. I think you'll be lucky to see your CPU utilisation hit 10%.
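The back-of-the-envelope arithmetic behind that point can be sketched as follows. The 100 MB/sec-per-core figure is an assumption (a commonly quoted SAS sizing rule of thumb), not a measurement from this system:

```python
# Rough I/O sizing arithmetic for a 2 x 12-core box.
CORES = 24            # 2 x 12-core CPUs from the original post
MB_S_PER_CORE = 100   # assumed sustained I/O demand per busy core (rule of thumb)
ARRAY_MB_S = 400      # measured write throughput from SASIOTEST

needed = CORES * MB_S_PER_CORE          # throughput to keep every core busy
busy_cores = ARRAY_MB_S / MB_S_PER_CORE # cores this array can actually feed

print(f"full utilisation needs ~{needed} MB/s; "
      f"{ARRAY_MB_S} MB/s feeds ~{busy_cores:.0f} cores "
      f"(~{busy_cores / CORES:.0%} of the box)")
```

Under that assumption the array would need roughly 2,400 MB/sec to keep all 24 cores busy, and at the measured read speed of under 100 MB/sec it feeds less than one core.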

Hope this helps.

Nik

Frequent Contributor
Posts: 92

Re: I/O Issues: read slower than write

Posted in reply to boemskats

Jaap/Nikola,

Thanks for your thoughts!

We'll investigate the caching.

Server OS is Windows Server 2012 R2.  I'll have our IT staff evaluate fio.

Thanks again,

Bill

Discussion stats
  • 3 replies
  • 938 views
  • 2 likes
  • 3 in conversation