Welcome back, SAS Agentic AI explorers! Today we move forward with Agentic AI workflows in SAS Intelligent Decisioning, focusing on the build. We'll detail how to integrate Large Language Models (LLMs), deterministic models, code files, and custom rules into a governed, modular SAS Agentic AI workflow.
Here’s a very quick overview of agentic AI workflows with SAS Viya:
SAS Agentic AI workflows give you a governed platform where LLMs, traditional machine learning, rule sets, and human review steps all play together. The Call LLM node is a key piece—it lets you connect any LLM via API. All you need is the container’s URL and a payload with prompts and options. SAS Agentic AI Accelerator standardizes the payload.
Flexibility is built-in: swap models or endpoints, run experiments with Prompt Builder, and publish workflows to containers or push them to production. SAS tracks each step with visual diagrams and versioning, so you always know which logic drove the decision.
Here’s the typical lifecycle for your Agentic AI workflow:
Let’s focus on the build with an example.
Suppose your goal is to support credit officers and make their job easier when assessing client credit requests. Their main pain point? They perform steps across multiple systems and lose time personalizing rejection emails. Templates exist, but personalization is still semi-automated. They’re considering LLMs—but messages must remain compliant and tone-appropriate.
That’s where you come in. You’re setting up a workflow that automates communication with human review checkpoints. Using SAS Intelligent Decisioning, you can build an Agentic AI workflow:
Start with their trusted, governed, versioned model to determine credit approval or rejection based on verified inputs.
Next, branch the logic: one path for approval messages, one for rejection. Each branch uses different LLMs and prompts. In this context, prompts act as templates that drive the message structure and tone, as sketched below.
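For illustration, the two prompt templates might look like this; the placeholder names are assumptions for the sketch, not fields from the real workflow:

```python
# Illustrative prompt templates for the two branches. Placeholders in
# braces are filled from decision inputs at run time; all names here
# are assumptions.
APPROVAL_PROMPT = (
    "Write a warm, professional email congratulating {client_name} on the "
    "approval of their credit request for {amount}. Keep it concise."
)
REJECTION_PROMPT = (
    "Write an empathetic, compliant email informing {client_name} that "
    "their credit request was declined because: {reason}. "
    "Suggest constructive next steps."
)
```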
On the rejection branch, the credit team uses an open-source model hosted in their infrastructure.
As noted, the Call LLM node connects any LLM via API: just provide the container's URL (llmURL) and a payload (llmBody) with prompts and options.
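As a rough sketch, those two inputs could look like the following; treat the URL and field names as assumptions, since the SAS Agentic AI Accelerator defines the actual payload schema:

```python
# Hypothetical inputs for the Call LLM node. The endpoint URL and the
# payload fields below are illustrative; the SAS Agentic AI Accelerator
# standardizes the real schema.
llmURL = "https://my-llm-container.example.com/v1/generate"

llmBody = {
    "prompt": (
        "Write an empathetic, compliant email informing Acme Ltd that "
        "their credit request was declined because: debt-to-income ratio "
        "above policy threshold. Suggest constructive next steps."
    ),
    "options": {"temperature": 0.2, "max_tokens": 400},
}
```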
With SAS, your LLM prompts can come from:
Models: The result of Prompt Builder experiments. You version your best prompt and register it as a portable model, ideal for governed, repeatable experiments.
Prompt Builder streamlines prompt development from experimentation to deployment. With project organization, experiment tracking, and integration, teams can confidently develop, test, and operationalize LLM prompts in their business processes. We will discuss the Prompt Builder in a future post.
Code File: A Python script that defines your prompt inline. Quick and direct, great for prototyping. You can swap it out later for a full model.
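Here is a minimal sketch of such a code file for the rejection branch. It assumes the Intelligent Decisioning convention of an execute function whose leading docstring declares the output variables; the variable names are illustrative:

```python
def execute(client_name, rejection_reason):
    'Output: prompt'
    # Build the rejection-email prompt inline. Swapping this code file
    # for a governed Prompt Builder model later leaves the flow intact.
    prompt = (
        f"Write an empathetic, compliant email informing {client_name} "
        f"that their credit request was declined because: {rejection_reason}. "
        "Keep the tone professional and suggest constructive next steps."
    )
    return prompt
```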
Either way, connect them to the Call LLM node, which handles the API call to your deployed LLM endpoint. You can deploy the LLM to a Private Azure Container Instance or deploy the LLM to a Kubernetes pod; the choice is in your hands.
Would you blindly trust the LLM output? No. Especially when the outcome is sensitive or could damage the client relationship.
You could use a SAS sentiment analysis model to evaluate the LLM-generated message and make the user’s job easier.
Depending on the detected sentiment, add rule sets to determine whether human review is needed, or if the message can be sent as-is.
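In production this routing would live in a SAS rule set, but a Python sketch captures the idea; the sentiment fields and thresholds below are assumptions to be tuned to your business policy:

```python
def execute(sentiment, sentiment_score):
    'Output: needs_review'
    # Send negative or low-confidence messages to a human reviewer;
    # everything else can go out as-is. The label and threshold are
    # illustrative, not prescriptive.
    needs_review = 1 if (sentiment == "Negative" or sentiment_score < 0.8) else 0
    return needs_review
```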
Before going live, score records and review the LLM outputs to ensure they meet business requirements. This gives you full control over quality and output, with no surprises in production.
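One way to spot-check a decision published to SAS Micro Analytic Service is a quick REST call to its scoring endpoint; the host, module name, token, and input fields below are all assumptions for illustration:

```python
import requests

# Hypothetical test call to a published decision. Host, module name,
# access token, and input fields are assumptions.
MAS_URL = "https://viya.example.com/microanalyticScore/modules"

resp = requests.post(
    f"{MAS_URL}/credit_comms_flow/steps/execute",
    headers={
        "Authorization": "Bearer <access-token>",
        "Content-Type": "application/json",
    },
    json={"inputs": [
        {"name": "client_name", "value": "Acme Ltd"},
        {"name": "credit_score", "value": 612},
    ]},
)
resp.raise_for_status()
print(resp.json())  # inspect the generated message before go-live
```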
SAS Intelligent Decisioning provides visual diagrams that show how records flow through your workflow.
It’s transparency and governance, built in.
SAS is well-positioned to be a significant player in the Agentic AI space. The SAS Viya platform is already trusted by a wide range of companies and users, providing a solid foundation to build upon. In my view, our greatest potential lies in developing workflows specifically tailored to address distinct customer business problems and solving them exceptionally well. While generative AI models are widely accessible, SAS’s value is in being model agnostic, integrating these models within our robust, trusted platform. This allows customers to leverage cutting-edge AI in their existing environments, alongside traditional Machine Learning models developed over the years.
If you liked this guide, give it a thumbs up!
Thanks to David Weik and Xin Ru Lee for sharing their time and resources.
SAS offers a full workshop with step-by-step exercises for deploying and scoring models using Agentic AI and SAS Viya on Azure. Access it on learn.sas.com in the SAS Decisioning Learning Subscription; the workshop includes a bookable environment for building agentic AI workflows.
If you need further guidance, don't hesitate to reach out.
Find more articles from SAS Global Enablement and Learning here.