I have a new single-server Windows SAS Studio server deployed in Azure. It is meeting our write I/O requirements, but our read I/O is nowhere close for either our Work or Data drives when tested with the sasiotest utility. Also worth noting: Resource Monitor shows the disk running at 98% Highest Active Time when testing both reads and writes.
The current setup is the following:
Compute SKU: Standard E16-4ds_v5: 4 vCPUs, 128 GB memory, max uncached disk throughput: 25600 IOPS / 600 MB/s
Disks setup with Windows storage pools
Temp/Work: 3 x P30 (200 MB/s each), caching: none; virtual disk: simple stripe, interleave 64K, columns 3; volume formatted at 64K
Data: 6 x P40 (250 MB/s each), caching: none; virtual disk: simple stripe, interleave 64K, columns 3; volume formatted at 64K
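For reference, the theoretical numbers implied by this layout can be worked out directly. This is just a quick sketch using the per-disk speeds quoted above (P30: 200 MB/s, P40: 250 MB/s); actual Storage Spaces throughput will vary:

```python
# Theoretical limits implied by the posted pool layouts.
# Per-disk throughputs are the quoted Azure figures (P30: 200 MB/s, P40: 250 MB/s).

def pool_summary(disks, mbps_per_disk, columns, interleave_kb):
    return {
        "aggregate_mbps": disks * mbps_per_disk,    # sum of the member disks' limits
        "full_stripe_kb": columns * interleave_kb,  # one full stripe across the columns
    }

work = pool_summary(disks=3, mbps_per_disk=200, columns=3, interleave_kb=64)
data = pool_summary(disks=6, mbps_per_disk=250, columns=3, interleave_kb=64)

print(work)  # {'aggregate_mbps': 600, 'full_stripe_kb': 192}
print(data)  # {'aggregate_mbps': 1500, 'full_stripe_kb': 192}
```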
Attached are the SAS system settings.
Any suggestions or advice would be greatly appreciated.
Thank you
Jamie
The E16-4ds_v5 instance has a Maximum uncached disk IO throughput of 600 MB/sec.
This means your Data and Temp/Work file systems are constrained to a combined total of 600 MB/s of I/O throughput to the external Premium storage you are using. This VM-level cap overrides the total throughput associated with the disks backing the file systems.
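To make the constraint concrete, here is a small sketch (using the figures posted above) of why the VM cap, not the disks, is the binding limit:

```python
# The effective throughput ceiling is whichever is lower: the VM's uncached
# disk throughput cap or the sum of the attached disks' own limits.

VM_UNCACHED_CAP_MBPS = 600           # Standard E16-4ds_v5

work_disks_mbps = 3 * 200            # 3 x P30
data_disks_mbps = 6 * 250            # 6 x P40

effective_work = min(VM_UNCACHED_CAP_MBPS, work_disks_mbps)  # 600
effective_data = min(VM_UNCACHED_CAP_MBPS, data_disks_mbps)  # 600

# And the cap is shared: Work and Data together still cannot exceed 600 MB/s.
print(effective_work, effective_data)
```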
You may want to look into using internal ephemeral storage for the Temp/Work file system. The new Lsv3 instance types have very fast internal ephemeral NVMe storage.
Can you explain why, when running the sasiotest.exe utility individually against my Work and Data disks, I get 550+ MB/s for writes but only 150 MB/s for reads?
Can you send me the exact sasiotest.exe commands that you are running?
sasiotest.exe S:\testfile1.dat -w -filesize 129G -pagesize 64K
sasiotest.exe S:\testfile1.dat -r -filesize 129G -pagesize 64K
How many times have you run the sasiotest.exe tests? Note that Azure Premium disks can burst to a higher I/O throughput for a short period of time before dropping back to their normal speed. If the numbers you are quoting are from a single run, the write test may be benefiting from the burst speed. Please run the tests several times in a row and average the results.
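As a hedged illustration (the MB/s figures below are made up, not your measurements), averaging repeated runs and optionally discarding the first, possibly burst-assisted, run looks like this:

```python
# Hypothetical throughputs (MB/s) from repeated sasiotest runs; the first run
# may benefit from Azure disk bursting, so compare averages with and without it.
runs = [640, 540, 535, 545, 530, 538]

avg_all = sum(runs) / len(runs)
avg_steady = sum(runs[1:]) / len(runs[1:])  # drop the first (possibly burst) run

print(f"average of all runs:     {avg_all:.0f} MB/s")
print(f"average excluding burst: {avg_steady:.0f} MB/s")
```

If the steady-state average is well below the first run, bursting is likely inflating your single-run numbers.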
I would also ask you to reach out to Azure to see what they say. Writes should not be this much faster than reads.
The average of 6 writes was 640 MB/s. Reads are still slow. Is it possible a SAS setting or parameter could address this, or is it strictly hardware at this point?
At this point it is strictly a hardware issue.
Does SAS have any best practices for Windows Storage Spaces for Storage Pools, Virtual Disks, and Volumes for SAS i.e. AllocationUnitSize, NumberofColumns, Interleave, and the Volume Allocation Unit Size?
We do not have that kind of guidance for Windows systems; we generally take the default Windows settings.
Some additional things to check are:
1) "SAS I/O Test Utility: v2.0" could report slower estimates than what is actually possible; this was fixed in version 2.1. Run sasiotest without arguments to print the version.
2) S:\ might be a mapped drive using SMB. Even when the drive is local, mapping it with net use can occasionally slow it down.
3) Windows disk Fair Share could be on. This causes intermittent and unpredictable throttling. It is normally off, but installing the Remote Desktop role can turn it on automatically.
4) Third-party security software (sometimes dueling antivirus packages) can also cause throughput issues.
If the above does not resolve your issue, please send me an email so I can have someone work with you directly. Margaret.Crevar@sas.com