Custom Task Types in SAS Customer Intelligence 360 let administrators and power users configure new task types for marketing users, enabling custom integrations with a seamless experience for end users. Custom Task Types come in two flavors, supporting triggered and bulk execution.
We have looked at using them for triggered use cases in the past (Tutorial: Configuring SAS Customer Intelligence 360 Channel Integrations with Custom Task Types, Bojan Belovic):
https://communities.sas.com/t5/SAS-Customer-Intelligence/Tutorial-Configuring-SAS-Customer-Intellige...
Today, we will walk through the process of configuring a bulk integration with AWS S3. The post is aimed at the more technical users and administrators responsible for connector and custom task type configuration, but it will be useful to any user of SAS CI360. Bulk connectors generally involve some code and the deployment of an integration function that accepts the generated data files, processes them, and delivers them to the ultimate destination.
While this post covers integration between SAS CI360 and AWS S3, the general principles apply to any bulk integration with CI360, as the high-level steps are similar regardless of the destination platform. In this example, the integration function (or connector function) is deployed to AWS infrastructure, using AWS Lambda for all the processing and API Gateway as the means of exposing the Lambda function behind an HTTPS endpoint.
Bulk Execution Flow for Custom Task Types
When a new custom task type runs in bulk execution, the overall sequence is as follows:
- Data is assembled by the task based on the selected data source (audience, on-premises data source, or uploaded cloud data), the specified targeting criteria, and the output data attributes.
- A data file in CSV format is staged in the CI360 environment and made accessible through a pre-signed URL that is valid for a limited amount of time.
- An API call is made to a webhook endpoint, providing all the data necessary for the data file to be picked up and processed.
When a custom task is executed, the audience file is constructed and handed off to the bulk connector. The bulk connector stages the audience CSV file, as well as a metadata file, and generates pre-signed URLs for these two files (this is similar to how UDM data is downloaded from CI360).
After the files are generated and staged, a webhook call is made to the predefined API endpoint with all the information needed to process the data generated by the CI360 custom task.
The webhook payload includes:
- Two pre-signed URLs: one for the data file and another for the metadata file, which describes the file layout
- General task information, such as the task name, task ID, and response tracking code
- Any send parameters configured as part of the custom task type
Here is an example of a webhook call:
{
  "task": {
    "occurrenceId": "bc70ea13-1539-4448-ac29-d152da52b2a3",
    "taskId": "8eb42f45-7991-48ae-a30a-8e033a88668d",
    "taskName": "AWS S3 Upload",
    "taskVersionId": "zTRWmPpTMIwahUbzoaLhlCeUOum984mk",
    "outboundEventName": "matchDataUploaded"
  },
  "tenant": {
    "externalTenantId": "0dh01d000a00013a3a2cac00",
    "tenantMoniker": "dmofin1"
  },
  "contactResponse": {
    "responseTrackingCode": "55e09dc2-1303-4be2-aa0f-3d9877154a27"
  },
  "presignedUrls": {
    "metadataFile": "https://ci-360-datahub-transfers-demo-us-east-1.s3.amazonaws.com/exports/1019/2e8d504f-b291-46d6-9169-f5c21810560e/bc70ea13-1539-4448-ac29-d152da52b2a3_metadata?X-Amz-Security-Token=xxxxxx",
    "dataFile": "https://ci-360-datahub-transfers-demo-us-east-1.s3.amazonaws.com/exports/1019/2e8d504f-b291-46d6-9169-f5c21810560e/bc70ea13-1539-4448-ac29-d152da52b2a3_data?X-Amz-Security-Token=xxxxxxxx"
  },
  "sendParameters": {
    "s3_filename": "my_audience.csv"
  }
}
Integration Function Process and Code
The general sequence for processing the staged output file is:
- Download the data file to temporary storage
- (optional) Process the file if necessary
- Upload the file to the S3 bucket using AWS library functions and the parameters received through the webhook (e.g., the S3 object name)
This diagram illustrates the high-level process:

Let’s look at these steps in a little more detail.
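Before extracting anything, the integration function needs the webhook payload itself. Assuming the Lambda sits behind API Gateway with a proxy integration (so the raw JSON body arrives in event["body"]), a minimal parsing sketch could look like this (parse_webhook is a hypothetical helper name used in the snippets that follow):
import json

def parse_webhook(event):
    # Parse the CI360 webhook body into a dictionary (assumes API Gateway proxy integration)
    payload = json.loads(event["body"])
    print("Processing task:", payload["task"]["taskName"],
          "- response tracking code:", payload["contactResponse"]["responseTrackingCode"])
    return payload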
We will first get the URLs for the metadata file and the data file from the "presignedUrls" object passed in by the webhook.
"presignedUrls": {
"metadataFile": "https://ci-360-datahub-transfers-demo-us-east-1.s3.amazonaws.com/exports/1019/2e8d504f-b291-46d6-9169-f5c21810560e/bc70ea13-1539-4448-ac29-d152da52b2a3_metadata?X-Amz-Security-Token=xxxxxx",
"dataFile": "https://ci-360-datahub-transfers-demo-us-east-1.s3.amazonaws.com/exports/1019/2e8d504f-b291-46d6-9169-f5c21810560e/bc70ea13-1539-4448-ac29-d152da52b2a3_data?X-Amz-Security-Token=xxxxxxxx"
}
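Continuing the parsing sketch above, the two URLs are simple dictionary lookups on the parsed payload (variable names here are just for illustration):
# Pre-signed URLs for the staged CI360 metadata and data files (valid for a limited time)
metadata_url = payload["presignedUrls"]["metadataFile"]
data_url = payload["presignedUrls"]["dataFile"]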
Another data item we need is the final destination filename in the S3 bucket, which we also get from the webhook, specifically from the "sendParameters" object. Send parameters are defined as part of the custom task type and contain any information needed to complete the integration process that is not part of the customer data file. Examples of send parameters, besides the S3 filename in this case, could be campaign codes or email template IDs for a destination email platform, an originating phone number for an SMS provider integration, or an audience ID for an advertising platform.
"sendParameters": {
"s3_filename ": "my_audience.csv"
}
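The destination filename is read the same way (again, an illustrative sketch):
# Destination object name in the target S3 bucket, configured as a send parameter on the custom task
s3_filename = payload["sendParameters"]["s3_filename"]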
We will download the data file to Lambda ephemeral storage, mounted as /tmp, and save it under the filename matching our desired final filename in the S3 bucket.
def download_file(url, local_filename):
    # Stream the pre-signed URL to local (ephemeral) storage without decoding the content
    with http.request('GET', url, preload_content=False, decode_content=False) as response:
        if response.status == 200:
            with open(local_filename, 'wb') as file:
                for chunk in response.stream(8192):
                    file.write(chunk)
            print("Download complete. File saved as", local_filename)
        else:
            print("ERROR: Unable to download file. Status Code:", response.status)
    return response.status
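Using the values extracted earlier, a call to this function might look like the following (local_path and the other variable names come from the earlier sketches and are not fixed names in the connector):
# Save the export in ephemeral storage under its final destination name
local_path = "/tmp/" + s3_filename
status = download_file(data_url, local_path)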
By default, AWS Lambda functions are assigned 512MB of ephemeral storage, but this can be increased to as much as 10GB if needed.
Once we have the data file downloaded to /tmp, the last step in this simple integration is to copy the file to an S3 bucket.
This is the part of the code that will differ most between connectors. While we are using AWS functions and services to move the file to an S3 bucket for this connector, in other applications this may be an upload of the file to an HTTP endpoint or SFTP host, or processing of the CSV file to build individual or batched API calls to a service.
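For example, a connector that delivers to an HTTP endpoint rather than S3 might replace the upload step with something along these lines (a hypothetical sketch; it reuses the urllib3 connection pool initialized later in this post, and endpoint_url would come from a send parameter):
def upload_to_http_endpoint(local_file, endpoint_url):
    # Hypothetical alternative delivery: POST the staged CSV to an external HTTP endpoint
    with open(local_file, 'rb') as f:
        response = http.request('POST', endpoint_url,
                                body=f.read(),
                                headers={'Content-Type': 'text/csv'})
    return response.status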
Since we are creating this integration function as an AWS Lambda, we can simply use AWS libraries to copy the file from /tmp to a designated S3 bucket.
def upload_to_s3(local_file, s3_file):
    try:
        # Copy the staged file to the destination S3 bucket
        s3.upload_file(local_file, s3_bucket_name, s3_file)
        # Generate a pre-signed URL for the uploaded object (valid for 24 hours) for logging/verification
        url = s3.generate_presigned_url(ClientMethod='get_object',
                                        Params={'Bucket': s3_bucket_name, 'Key': s3_file},
                                        ExpiresIn=24 * 3600)
        print("Upload Successful:", url)
        return url
    except FileNotFoundError:
        print("ERROR: The file was not found")
        return None
The S3 bucket name is stored as a Lambda environment variable called "s3_bucket_name", and we initialize it as part of the function startup:
s3_bucket_name = os.environ["s3_bucket_name"]
The S3 client and HTTP connection pool are also initialized on function startup:
import os
import urllib3
import boto3

http = urllib3.PoolManager()
s3 = boto3.client('s3')
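Putting these pieces together, a minimal handler might look like the sketch below. It reuses the hypothetical parse_webhook helper from earlier along with download_file and upload_to_s3, and keeps error handling to a minimum:
def lambda_handler(event, context):
    # Parse the CI360 webhook payload (see the parsing sketch earlier in this post)
    payload = parse_webhook(event)

    # Pull out the pre-signed data file URL and the destination filename
    data_url = payload["presignedUrls"]["dataFile"]
    s3_filename = payload["sendParameters"]["s3_filename"]

    # Download the export to ephemeral storage, then copy it to the destination S3 bucket
    local_path = "/tmp/" + s3_filename
    if download_file(data_url, local_path) != 200:
        return {"statusCode": 500, "body": "Failed to download CI360 data file"}

    upload_to_s3(local_path, s3_filename)
    return {"statusCode": 200, "body": "File delivered to S3 as " + s3_filename}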
For the Lambda function to be able to access the S3 bucket and copy the file, we must grant proper IAM permissions to the Lambda function, that is, to the role it executes under. We will assign the AmazonS3FullAccess policy to the Lambda role.
I won’t go into further details on how to create and deploy a Lambda function, but all the steps are the same as when creating and deploying a Lambda for a triggered connector, and are described in detail in a previous post (Tutorial: Building a SAS Customer Intelligence 360 Connector Function, Bojan Belovic):
https://communities.sas.com/t5/SAS-Customer-Intelligence/Tutorial-Building-a-SAS-Customer-Intelligen...
Once the Lambda is deployed and the API is created, the resulting URL used to invoke the Lambda becomes the Endpoint URL for the bulk connector webhook endpoint.
Published Connectors and Examples
Finally, you can find the complete code for this connector on our GitHub, together with a few other bulk integrations:
AWS S3: https://github.com/sassoftware/ci360-extensions/tree/main/code/ci360-s3-bulk-connector
Braze: https://github.com/sassoftware/ci360-extensions/tree/main/code/ci360-braze-bulk-user-import-connecto...
MailChimp: https://github.com/sassoftware/ci360-extensions/tree/main/code/ci360-mailchimp-bulk-connector
The Braze and MailChimp connectors are good examples of different flavors of processing CI360 output files. The Braze connector relies on Braze-created processing functions being deployed in AWS and invoked after our main code fetches and stages the CI360 files. The MailChimp connector, on the other hand, relies on an API-based integration where the CSV output file is processed row by row and sent to MailChimp via their public API methods.