Denise
Obsidian | Level 7

I am looking for the best documentation I can find to support the need for separate LUNs when building a high-performance environment for SAS 9.4.

 

We have new Linux servers (a VM in UAT, and a physical compute server with a VM metadata server in PROD) and I need to bolster my case for multiple LUNs.  The technology team is telling me that separate LUNs are antiquated thinking and don't really affect performance.  I tried to explain that this is not antiquated thinking where SAS is concerned, and found one paper to support my statement: http://support.sas.com/resources/papers/proceedings16/8220-2016.pdf

 

Are there any other papers or documents that will help prove my case?

 

Thank you,

Denise

5 REPLIES
Kurt_Bremser
Super User

Set the minimum throughput for the storage subsystem in your service agreement. If it doesn't meet the specification in real operations, step on their toes until it does.

If they think they know better, let 'em.

boemskats
Lapis Lazuli | Level 10

Hi Denise,

 

Storage subsystem architecture and design should be taken very seriously when it comes to SAS, and it is extremely important to get this nailed down while you're still at the project stage. You're absolutely right to ask for multiple LUNs at the very least, and it is important that your storage vendor/provider understands that SAS demands streaming throughput rather than high IOPS. If you have the option, I would also argue for direct-attached flash-based storage for your SASWORK while you're still at the design stage. If you have an existing workload that you will be migrating, monitor and profile it, so that you can use those measurements to validate that the hardware provided for the new environment meets your needs before you start the migration.
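If you need numbers to back that up, a quick baseline is a direct-I/O dd run against the candidate SASWORK filesystem. This is only a rough sketch; /saswork, the block size and the file size below are placeholders you would adjust, and SAS also publishes more rigorous I/O test scripts you can use instead.

# Sequential write of ~10 GB to the candidate SASWORK filesystem, bypassing the page cache
dd if=/dev/zero of=/saswork/ddtest.tmp bs=256k count=40000 oflag=direct

# Sequential read of the same file back, also bypassing the cache
dd if=/saswork/ddtest.tmp of=/dev/null bs=256k iflag=direct

# Clean up the test file
rm /saswork/ddtest.tmp

Run it a few times and at different times of day; the sustained MB/s figures are what you compare against SAS throughput guidelines, not IOPS.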

 

In answer to your actual question, this paper by @MargaretC should be your reference. 

 

Nik

Denise
Obsidian | Level 7

Thank you for your reply and paper reference.  It has been helpful to me.

 

Denise

ronan
Lapis Lazuli | Level 10

From my experience with SAS 9.2 TS2M3 on Red Hat Enterprise Linux 5 (x64) with modern EMC² SANs, I can confirm that setting up multiple LUNs notably improves throughput for SAS. We had to optimize the performance of a SASWORK initially installed on a dedicated RAID5 array (5 x 10k disks, I believe), everything physical. We decided to change to a SAN volume (filesystem) built on a RAID0 logical array striped across multiple LUNs. In addition, the class of each LUN was specifically chosen as Very Fast (C1, a mix of SSDs and 15k disks).
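For illustration, the LVM layout looked roughly like the sketch below. The device names, the stripe count (-i) and the stripe size (-I, in KB) are examples rather than the exact values from our environment, and the filesystem type is your choice.

# Present the four LUNs to LVM and build one striped SASWORK volume across them
pvcreate /dev/mapper/lun1 /dev/mapper/lun2 /dev/mapper/lun3 /dev/mapper/lun4
vgcreate vg_saswork /dev/mapper/lun1 /dev/mapper/lun2 /dev/mapper/lun3 /dev/mapper/lun4

# -i 4 stripes the LV across all four LUNs, -I 64 uses a 64 KB stripe size
lvcreate -n lv_saswork -i 4 -I 64 -l 100%FREE vg_saswork

# Create the filesystem (xfs shown here only as an example)
mkfs.xfs /dev/vg_saswork/lv_saswork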

 

Taken from my notes:

 

The SAN filesystem optimised with RAID0-style striping proved more performant:

 

- between +25% and +50% faster on sequential reads (780 MB/s max)

- between +10% and +100% (2x) faster on sequential writes (1200 MB/s max)

 

If you install on RHEL 5 or 6, pay attention to enabling read-ahead on the filesystem; this proved to be a serious bottleneck for sequential read throughput (Red Hat confirmed this was essentially a bug).
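You can check and change the current value on the fly with blockdev (the device path below is a placeholder; the value is in 512-byte sectors, and a setting made this way does not survive a reboot):

# Show the current read-ahead, in 512-byte sectors
blockdev --getra /dev/mapper/vg_saswork-lv_saswork

# Raise it for testing
blockdev --setra 16384 /dev/mapper/vg_saswork-lv_saswork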

 

Since SAS I/O performance and behaviour changed a lot between 9.2 and 9.4 (block boundary alignment), I cannot confirm that the gains will be as high with 9.4 on RHEL 7, or whether this tuning is still required.

ronan
Lapis Lazuli | Level 10

Side note: the magic bullet was read-ahead sectors set to 16384 (16M) instead of the default value (128k), which was too low. However, this parameter was set not on the filesystem but, more precisely, at the *Logical Volume* level.
There was a bug with sequential reads on RHEL 5 & 6, specifically for filesystems:

- built inside an LV

- stored on LUNs accessed through fibre-channel HBAs

 

Read transactions were restricted to a single core and slowed down by the insufficient read-ahead value. After the parameter change, sequential read throughput increased more than 3x! This behaviour was not observed for filesystems stored on the machine's internal disks and accessed through the local I/O controller, even for filesystems defined on LVM (LV/PV) ...
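Setting it at the LV level means LVM keeps the value in its metadata, so it is reapplied whenever the LV is activated. A sketch, with a placeholder LV path and the value in 512-byte sectors:

# Persist the read-ahead setting in the LVM metadata for the SASWORK logical volume
lvchange --readahead 16384 /dev/vg_saswork/lv_saswork

# Confirm the new value ("Read ahead sectors" in the output)
lvdisplay /dev/vg_saswork/lv_saswork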

