
Until RHEL is supported on AWS EC2 I3 Instances


On April 22, 2017, SAS learned that NO version of RHEL is supported on the AWS EC2 I3 instances. This is due to the lack of drivers for the NVMe SSD drives and the new Elastic Network Adapter (ENA). Please note that Red Hat is aggressively working to resolve this issue, but in the meantime SAS strongly encourages SAS 9.4 customers to use the EC2 I2 instances.

 

We understand that these are older instances, but they are still available from AWS via this previous generation instances link.  

 

For maximum I/O throughput, we strongly recommend that you use the i2.8xlarge instance for your SAS compute node(s).  This instance has 16 physical cores (32 vCPUs), 244 GB of RAM, eight 800 GB internal SSD drives (these drives are ephemeral, and all data on them is erased when the instance is stopped or terminated), and a single 10 Gb network interface.  Amazon configured several storage scenarios and gathered I/O throughput statistics on them.  Please see Appendix A for the details.

 

We suggest that you stripe the eight 800 GB internal SSD drives to get the best I/O throughput for your SAS WORK and UTILLOC file systems.  Please note that with striped SSD drives, you can leave UTILLOC in its default location, which is inside SAS WORK.
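A minimal sketch of one way to build that stripe with mdadm, assuming the eight ephemeral drives appear as /dev/xvdb through /dev/xvdi (device names vary by AMI and virtualization type, so verify with lsblk first):

# Build a RAID 0 stripe across the eight ephemeral SSDs.
mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/xvd{b..i}

# Create a file system and mount it for SAS WORK.
mkfs.xfs /dev/md0
mkdir -p /saswork
mount -o noatime /dev/md0 /saswork

Because the drives are ephemeral, the array and file system must be rebuilt after the instance is stopped and restarted.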

 

We also suggest that the RHEL administrator read the Optimizing SAS on RHEL paper and apply the tuning recommendations in it.
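As one example of the kind of tuning that paper covers, RHEL 7's tuned profiles can be switched to favor throughput (verify the exact recommendation against the paper itself):

# Apply the throughput-oriented tuned profile and confirm it is active.
tuned-adm profile throughput-performance
tuned-adm active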

 

Permanent SAS data, SAS binaries, SAS configuration files, home directories, and so on should be housed on EBS storage for a single SAS compute server instance, or in Lustre for a SAS Grid infrastructure (see this paper for how to configure SAS Grid on the Amazon cloud).
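A minimal sketch of mounting a persistent EBS volume for that permanent data, assuming the volume is attached as /dev/xvdf (the device name and mount point are placeholders):

# One-time setup of the EBS-backed file system.
mkfs.xfs /dev/xvdf
mkdir -p /sasdata
mount -o noatime /dev/xvdf /sasdata

# Make the mount persistent across reboots.
echo '/dev/xvdf  /sasdata  xfs  defaults,noatime  0 0' >> /etc/fstab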

 

As a reminder, it is advisable for all instances used by SAS (metadata tier, middle tier, compute tier, and external database tier) to be located in the same Amazon Region and Availability Zone.
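For example, the Availability Zone can be pinned when launching with the AWS CLI (a sketch only; the zone and key pair names are placeholders):

aws ec2 run-instances \
    --image-id ami-b63769a1 \
    --instance-type i2.8xlarge \
    --placement AvailabilityZone=us-east-1a \
    --key-name my-key-pair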

 

Appendix A

 

To help with getting the best I/O throughput, Amazon set up four different storage configurations on an i2.8xlarge instance running RHEL-7.3_HVM_GA-20161026-x86_64-1-Hourly2-GP2 (ami-b63769a1) and measured I/O throughput with the rhel_iotest.sh test script from SAS.  The script is run against the target file system to be measured, as shown below; the four configurations and their results follow.
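An example invocation (the target path here is illustrative):

./rhel_iotest.sh -t /saswork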

 

Configuration 1:

  • (4) 7,965 GiB ST1 EBS volumes, with an expected throughput of 312 MiB/s per volume, in a RAID 0 set named md0.
  • (8) ephemeral SSDs configured as a RAID 0 set named md1.

 

Ephemeral SSD (local)

  • read throughput rate:   232.25 megabytes/second per physical core
  • write throughput rate:  82.94 megabytes/second per physical core

 

EBS

  • read throughput rate:   71.00 megabytes/second per physical core
  • write throughput rate:  70.07 megabytes/second per physical core

 

Configuration 2:

  • (3) 12,800 GiB ST1 EBS volumes, reaching the maximum expected throughput of 500 MiB/s per volume, in a RAID 0 set named raidvol

 

EBS

  • read throughput rate:   72.06 megabytes/second per physical core
  • write throughput rate:  70.37 megabytes/second per physical core

 

Configuration 3:          

  • (8) ephemeral SSDs configured as a RAID 0 set named raidvol

 

Ephemeral SSD (local)

  • read throughput rate:   230.44 megabytes/second per physical core
  • write throughput rate:  86.22 megabytes/second per physical core

 

Configuration 4:

  • (4) ephemeral SSDs configured as a RAID 0 set
  • (8) 2 TB ST1 EBS volumes

 

Ephemeral SSD (local)

  • read throughput rate:   136.48 megabytes/second per physical core
  • write throughput rate:  96.45 megabytes/second per physical core

 

EBS

  • read throughput rate:   72.14 megabytes/second per physical core
  • write throughput rate:  70.71 megabytes/second per physical core

 

Comments

Very valuable insight @MargaretC. Thank you very much!

Great content, extremely useful!

I have been informed by Amazon that instance availability should not be an issue, so you should ignore this statement from the original article: "You will need to do a one year reservation of your I2 instance so that you will be assured of having one if you need to restart/reboot your instance.  Since these are previous generation instances, there are not a lot of them in the pool and because of this, you may be downsized to a smaller I2 instance after a reboot/restart."

 

Per @MargaretC's comment, I edited the article to remove the statement about the Amazon instance availability "limitation".

Is it safe to assume in the Appendix that the ST1 drives were always configured in RAID 0?

 

Very interesting that the size of the ST1 drives didn't seem to alter the per-core throughput performance.

Yes, that is a safe assumption.

One more question. Have you considered using LVM (with striping) versus RAID 0?  We currently use LVM with striping for the eight instance drives on our i2.8xlarge, and I was wondering if there are any advantages to RAID 0 using a tool like mdadm.
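For reference, a minimal sketch of the two approaches being compared, assuming eight NVMe instance-store devices /dev/nvme0n1 through /dev/nvme7n1 (device names and the stripe size are assumptions; adjust to what the OS presents):

# LVM with striping (volume group saswork, logical volume root).
pvcreate /dev/nvme{0..7}n1
vgcreate saswork /dev/nvme{0..7}n1
lvcreate --stripes 8 --stripesize 256k --extents 100%FREE --name root saswork
mkfs.xfs /dev/saswork/root

# The mdadm RAID 0 equivalent.
mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/nvme{0..7}n1
mkfs.xfs /dev/md0

For a pure stripe the two are functionally close; LVM adds volume-management flexibility, while mdadm is a thinner layer.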

I ran rhel_iotest.sh on an i3.16xlarge.

 

8 Ephemeral Drives Striped using LVM as /saswork

 

RESULTS
-------
INVOCATION: rhel_iotest -t /saswork

TARGET DETAILS
directory: /saswork
df -k: /dev/mapper/saswork-root 14841653248 33840 14841619408 1% /saswork
mount point: /dev/mapper/saswork-root on /saswork type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=4096,noquota)
filesize: 390.08 gigabytes

STATISTICS
read throughput rate: 384.15 megabytes/second per physical core
write throughput rate: 138.46 megabytes/second per physical core
-----------------------------


********* ALL ERRORS & WARNINGS *********
<<WARNING>> insufficient free space in [/saswork] for FULL test.
<<WARNING>> - smaller stack size and # of blocks will be used.
*****************************************

 

 

Note: I'm not sure why there is a warning message about space; the /saswork file system started the test with 14 TB of free space. Perhaps the amount of free space overflowed some sort of integer value in the script.

 

----------------------------------------------------------------------------------------------

 

3 x 12.5 TB Drives in an LVM Stripe for /users

 

RESULTS
-------
INVOCATION: rhel_iotest -t /users

TARGET DETAILS
directory: /users
df -k: /dev/mapper/users-root 40263219200 34032 40263185168 1% /users
mount point: /dev/mapper/users-root on /users type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=1536,noquota)
filesize: 480.09 gigabytes

STATISTICS
read throughput rate: 39.68 megabytes/second per physical core
write throughput rate: 49.73 megabytes/second per physical core

Support for AWS I3 instances is now available with RHEL 7.4 (https://access.redhat.com/articles/3135091).

 

David

 
