HejHarald
Fluorite | Level 6

I'm building some new SAS servers on a Power8.

And I'd like to get it right, for best performance.

 

The Compute server has a /saswork mount point for temporary data.

 

I have followed the tuning guide, and it mentions this:

 

For SAS WORK area file systems which store only temporary data, consider turning off logging to improve performance.

# mount -o log=NULL /mountpoint
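As far as I understand, trying it out would look something like this (untested on my side; /saswork and the verification step are just my guess at how to apply it for a test run):

umount /saswork
mount -o log=NULL /saswork        # remount without a JFS2 log
mount | grep saswork              # the options column should now show log=NULL

umount /saswork
mount /saswork                    # back to the normal options from /etc/filesystems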

 

 

Do any of you have any experience with this?

Does it make a huge difference in performance?

Other suggestions?

 

My SAS Compute server - so far (SAS not installed):

OSlevel 7200-01-01-1642

7 cores - SMT 8

56 GB RAM

 

Each filesystem has its own volume group, with a separate disk for the JFS2 log:

Filesystem         GB blocks      Free  %Used  Iused  %Iused  Mounted on
/dev/saslv            249.88    249.84     1%      4      1%  /sas
/dev/sasworklv        249.88    249.84     1%      4      1%  /saswork
/dev/sasutillv        199.88    199.84     1%      4      1%  /sasutil
/dev/sasdatalv       1497.50   1497.27     1%      4      1%  /sasdata
/dev/spdsdatalv      1048.25   1048.09     1%      4      1%  /spdsdata

 

mounted           mounted over   vfs    date          options
---------------   ------------   ----   ------------  ---------------
/dev/saslv        /sas           jfs2   Mar 03 08:33  rw,rbw,rbr,noatime,log=/dev/sas_loglv
/dev/sasworklv    /saswork       jfs2   Mar 03 08:33  rw,rbw,rbr,noatime,log=/dev/saswork_loglv
/dev/sasutillv    /sasutil       jfs2   Mar 03 08:33  rw,rbw,rbr,noatime,log=/dev/sasutil_loglv
/dev/sasdatalv    /sasdata       jfs2   Mar 03 08:33  rw,rbw,rbr,noatime,log=/dev/sasdata_loglv
/dev/spdsdatalv   /spdsdata      jfs2   Mar 03 08:33  rw,rbw,rbr,noatime,log=/dev/spdsdata_loglv

 

SAS has its own HBAs.

 

These tuning settings are also implemented:

 

# JFS2 buffer preallocation and sequential read-ahead / write-behind tuning
ioo -p -o j2_dynamicBufferPreallocation=256 -o j2_maxPageReadAhead=2048 -o j2_minPageReadAhead=16 -o j2_nPagesPerWriteBehindCluster=64
vmo -o nokilluid=10                                              # protect UIDs below 10 from the low-paging-space killer
no -o tcp_nodelayack=1                                           # send TCP ACKs immediately instead of delaying them
chdev -l sys0 -a maxuproc=2048                                   # raise the per-user process limit
lvmo -v rootvg -o pv_pbuf_count=2048                             # more LVM pbufs per physical volume in rootvg
apply "lvmo -v %1 -o pv_pbuf_count=1024" $(lsvg | grep -v root)  # same for every non-rootvg volume group

maxfree = 15176

minfree = 960
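(For reference, these would be set with vmo, roughly like this; a sketch rather than the exact command I used:)

vmo -p -o minfree=960 -o maxfree=15176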

 

Thank you in advance 🙂

 

Cheers

Harald

 

19 REPLIES
Kurt_Bremser
Super User

Simply test it. Logging in journaling file systems is designed to prevent data loss in case of a system crash, unexpected hard shutdown, or power loss. Since SAS WORK files would by definition be lost anyway in such a case, your only penalty might be a longer fsck when the system recovers, or possibly rebuilding the WORK file system from scratch if the damage is too great.

 

So create a real burner of a SAS program that makes heavy use of WORK, run it once with the standard settings, activate the mount -o log=NULL /mountpoint option, restart the system, and rerun the program (both runs with fullstimer).
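Something along these lines would do as a starting point (just a sketch; the file names and the row count are arbitrary, so scale it up until WORK I/O clearly dominates the elapsed time):

cat > /tmp/workburner.sas <<'EOF'
data work.big;                                 /* write a large dataset into WORK */
   do i = 1 to 50000000;
      x = ranuni(0);
      y = ranuni(0);
      output;
   end;
run;

proc sort data=work.big out=work.big_sorted;   /* force heavy WORK/utility I/O */
   by x;
run;
EOF

sas -fullstimer -sysin /tmp/workburner.sas -log /tmp/workburner_run1.log
# then remount /saswork with log=NULL, rerun with -log /tmp/workburner_run2.log, and compare the FULLSTIMER numbers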

HejHarald
Fluorite | Level 6

Hi Kurt,

 

Thank you for your response.

OK - I'll wait for SAS to install their software, and I will run the tests with them.

If it should crash, and it's only temporary data, maybe it's faster to remove the filesystem and recreate it, together with its subfolders.
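Roughly like this, I imagine (only a sketch with the VG/LV names from above; I haven't tested the exact commands):

umount /saswork
rmfs -r /saswork                    # removes the damaged filesystem, its logical volume and the mount point
crfs -v jfs2 -g saswork_vg -m /saswork -A yes -a size=250G -a logname=/dev/saswork_loglv
# (re-add the noatime,rbr,rbw options in /etc/filesystems before mounting)
mount /saswork
mkdir -p /saswork/sastmp            # recreate whatever subfolders the SAS config expects
chmod 1777 /saswork/sastmp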

 

Best Regards

Harald

Kurt_Bremser
Super User

It has been my experience that AIX (our DWH has been running on AIX since 2000, starting with AIX 4.3.2) is very reliable and hardened against outages.

We only lost a filesystem once, and that was due to not-very-well-tested, bleeding-edge storage virtualization software (not from IBM) that scrambled our mirrors.

But I have to admit that I keep jfs2 logging on, as I have devised a multi-disk work storage layout that allows users to create semi-permanent data that persists from SAS session to SAS session, but has a limited "shelf life". So I want those filesystems to survive an outage without data loss.

HejHarald
Fluorite | Level 6

I see your point, and I will discuss this with our SAS users.

Still, I want to run the test, just to see the difference.

Then it's up to the SAS users what to choose.

 

Thanks

Harald

HejHarald
Fluorite | Level 6

Hello Margaret,

 

I'm not in a position to incur any expenses on this matter.

But if one of your AIX experts could look through my post free of charge, I would be happy.

 

We are building a 14-core SAS system on Power8 and AIX 7.2, and I'd like to have MAX performance 🙂

 

Cheers,

Harald

Kurt_Bremser
Super User

I completely forgot what @MargaretC mentioned: the tuning guides and whitepapers that SAS publishes with help from the system manufacturers are first-class, and one can see throughout that the suggested settings are the result of thorough testing.

I gleaned a lot of useful information from them when setting up our server. Without that, I wouldn't be able to run it on the considerably aged HW it has now. I will of course consult the latest papers when setting up the next incarnation.

HejHarald
Fluorite | Level 6

Hello Margaret,

 

I have dual VIO servers, only for SAS.

Each has a 16 Gb/s HBA, but it can only run at 8 Gb/s, due to the storage's capability.

 

Storage is SAN devices, connected through SVC:

hdisk100 Available C7-T1-01 MPIO FC 2145

 

devices.sddpcm.71.rte      2.6.6.0  COMMITTED  IBM SDD PCM for AIX V71

We can't go much higher, due to another old system connected to the SVC.

 

The LUNs are ordered as Tier 1 on an IBM V7000 SAN storage.

 

Regarding SAS WORK in memory - No, we won't do that.

 

Cheers

Harald

 

 

 

 

MargaretC
SAS Employee

Here is what my AIX expert said about your storage:

 

"well, I'm fine with him using SVC, under the right conditions. We generally don't consider V7000 as Tier1 disk either, as it's our mid-range storage offering.

I'm not seeing any disk striping here. He's only showing 1 hdisk on the host side, and No, that is not good. We need to surface multiple LUNs for EACH filesystem at the AIX level. It's never advised to even use only 1 hdisk. Also, with 7 cores in Compute, we're going to say we need to drive around 1 GBps of throughput. A single 16Gb HBA running at 8 Gb will never get them there. We tend to use 750MBps as the clip point.


I'm not sure why they're using SDD-PCM for multi-pathing, though. Generally, AIX MPIO is used; it's been the default for AIX for many years. Maybe SVC is driving them there, but I've done SVC using MPIO, I believe."

 

Please review this Best Practices paper on configuring I/O for SAS, so that you will understand how SAS does I/O and what needs to be in place to get MAX performance: http://support.sas.com/resources/papers/proceedings16/SAS6761-2016.pdf

 

I think we need to take this off the public chat and have a meeting with my IBM expert and yourself so we can discuss this via a conference call.  

 

Cheers,

Margaret

HejHarald
Fluorite | Level 6

Arghhh - Sorry, my fault.

 

No, I'm using several LUNs for my filesystems.

mkvg -S -s 128 -y sas_vg hdisk100 hdisk101
mkvg -S -s 128 -y saswork_vg hdisk200 hdisk201
mkvg -S -s 128 -y sasutil_vg hdisk300 hdisk301
mkvg -S -s 256 -y sasdata_vg hdisk400 hdisk401 hdisk402 hdisk403 hdisk404 hdisk405 hdisk406 hdisk407 hdisk408 hdisk409 hdisk410
mkvg -S -s 256 -y spdsdata_vg hdisk500 hdisk501 hdisk502 hdisk503 hdisk504 hdisk505 hdisk506 hdisk507

 

The JFS2 log is placed on hdisk[1-5]00.

The filesystems are on the *01 disks and up.
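For illustration, the data LV can be spread across all ten of its data LUNs like this (the -e x maximum-spread policy and the LP count are just an example, not necessarily the exact commands I ran):

mklv -y sasdatalv -t jfs2 -e x sasdata_vg 5990 hdisk401 hdisk402 hdisk403 hdisk404 hdisk405 hdisk406 hdisk407 hdisk408 hdisk409 hdisk410   # 5990 x 256 MB PPs ~ 1.5 TB
crfs -v jfs2 -d sasdatalv -m /sasdata -A yes -a logname=/dev/sasdata_loglv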

 

We use SDDPCM so that all paths on the adapter are used.
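(Path usage can be checked with the SDDPCM tools, e.g.:)

pcmpath query adapter     # per-adapter path counts and state
pcmpath query device      # per-hdisk paths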

 

A conference call would be fine, thanks.

 

Cheers

Harald

HejHarald
Fluorite | Level 6
Hi Margaret,
Could we please have that conference call tomorrow?
My timezone is GMT+2.
I have worked late today, so I'm going home now.
Thank you,
Harald
MargaretC
SAS Employee
It will take longer than one day to set up this call.
What is a direct email I can reach you at? What is your company name and country location?
Margaret
HejHarald
Fluorite | Level 6
Hello Margaret,

That's OK - I'll wait for you to set up the Conference call.

I'm working at Region Hovedstaden / The Capital Region of Denmark.
And I'm the System Administrator on AIX.

The new SAS system is set up in cooperation with Region Midt.
Eventually, this system will serve all 5 Regions of Denmark.

Below is my contact information.

PS. Sorry for my outburst (the Arghh). It was only meant for myself 🙂 (I pressed Post before thinking.)

With kind Regards / Med venlig hilsen

Harald Genderskov Jacobsen
Unix administrator

Mobile: +45 20 59 78 81
Mail: hgj@regionh.dk

Region Hovedstaden
Center for It, Medico og Telefoni
Serverdrift, Backup & Storage
Borgervænget 7, 3.
2100 København Ø
Denmark

Tel.: +45 3864 8000
Web: www.regionh.dk/imt


Kurt_Bremser
Super User

@HejHarald wrote:


PS. Sorry for my outburst (the Arghh). It was only meant for myself 🙂 (I pressed Post before thinking.)





Don't bother. After all, this is an IT community, and we all have those moments where we just need to vent before exploding 😉

HejHarald
Fluorite | Level 6
Hello Kurt,
Thank you!
Cheers Harald

