11-21-2017 04:11 PM
I am trying to upload a large table from SAS to Amazon Redshift using the bulk-load option (http://documentation.sas.com/?docsetId=acreldb&docsetTarget=n1getgqem5whhmn1bxx7ct75ns82.htm&docsetV...).
It requires the BL_BUCKET= option (http://documentation.sas.com/?docsetId=acreldb&docsetTarget=n1n1jcqd49bk0kn1r66oe3o3dt47.htm&docsetV...).
I have the bucket name, access key, and secret key, but I don't know how they should be entered into the data set option. Could someone post an example of PROC APPEND with the BL_BUCKET= option?
11-21-2017 04:22 PM
Here is how I expect it to work. Note that you need to be on SAS 9.4M4 or later for this option to be available.
proc datasets library=REDSHIFT;
    append base=BaseTable data=AppendTable (bl_bucket=MyBucketName);
run;
quit;
11-22-2017 09:24 AM
Thanks for your reply.
The data set option should go on the base table, since that is the destination.
The BULKLOAD=YES option should be used as well.
Here is what I tried, and it is not working. It gives me an error saying 'Unknown region', and there doesn't seem to be a way to supply the region.
proc append base=rds_lib.base_table(bulkload=yes BL_BUCKET=bucket_name)
            data=work.append_table;  /* source table to append */
run;
11-26-2017 04:17 PM - edited 11-27-2017 03:33 PM
The bulkload options need to be on the source table (data=), as this is the one you want to load in bulk.
...I stand corrected: the bulkload options need to be on the target table.
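To illustrate the correction, here is a minimal sketch contrasting the two placements. All library, table, and bucket names are placeholders:

```sas
/* Per the correction above, the bulk-load options go on the
   Redshift target (base=) table, not on the source (data=) table. */
proc append base=rds_lib.base_table(bulkload=yes bl_bucket=my_bucket)
            data=work.stage;   /* local SAS table being appended */
run;
```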
11-22-2017 10:25 AM
I found some documentation on it. There are additional options that should be supplied; an example is here: https://communities.sas.com/t5/SAS-Communities-Library/Redshift-and-SAS/ta-p/361738
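Based on that article, a fuller sketch might look like the following. Only BULKLOAD= and BL_BUCKET= appear earlier in this thread; the BL_KEY=, BL_SECRET=, and BL_REGION= option names are assumptions here, so verify the exact spellings against the linked documentation for your SAS release. All connection values and names are placeholders:

```sas
/* Hedged sketch: option names other than BULKLOAD= and BL_BUCKET=
   are assumptions; check the SAS/ACCESS to Redshift documentation. */
libname rds_lib redshift server="mycluster.redshift.amazonaws.com"
        database=mydb user=myuser password="XXXXXXXX";

proc append base=rds_lib.base_table(bulkload=yes
                                    bl_bucket=my_bucket
                                    bl_key="MY_ACCESS_KEY_ID"   /* assumed option name */
                                    bl_secret="MY_SECRET_KEY"   /* assumed option name */
                                    bl_region="us-east-1")      /* assumed option name */
            data=work.stage;
run;
```

Supplying the region explicitly is what the 'Unknown region' error above suggests is missing, assuming the option exists in your release.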