
SAS Agentic AI Accelerator – Register and Publish Models


The SAS Agentic AI Accelerator is designed to help businesses integrate Generative AI into their workflows efficiently. It lets you wrap models, including proprietary and on-premises LLMs, in code and register them in SAS Model Manager. These code-wrapped models can later be deployed and used seamlessly in SAS Intelligent Decisioning, SAS Studio, and other applications, ensuring flexibility and governance. This post, the first in a series, explores how registration and publishing set the foundation for scalable agentic AI workflows.

 

 

Overview

 

The SAS Agentic AI Accelerator has been a key topic at SAS Innovate 2025, showcasing innovative ways to build agentic AI workflows. Developed by a team of SAS experts, this accelerator leverages SAS Viya products and capabilities to bring Generative AI into practical, governed use cases.

 

If you missed the sessions, here’s a quick introduction to what the accelerator offers:

 

 

Why an Accelerator?

 

The SAS Agentic AI Accelerator can help companies adopt Generative AI in a structured and agile way, drastically reducing the time from prototype to production. Key benefits include:

 

  • Enabling business users to create Generative AI use cases using low/no-code solutions.
  • Integrating external large language models (LLMs) such as OpenAI’s GPT, Google’s Gemini, or open-source models from Hugging Face into workflows and agents.
  • Combining non-deterministic LLMs with deterministic models created in SAS or Python.
  • Governing and controlling LLM usage in workflows for secure deployment and effective monitoring.

 

SAS Agentic AI Accelerator Components

 

  • A foundational model repository in SAS Model Manager.
  • A prompt builder user interface.
  • A prompt model repository.
  • Agentic AI workflows in SAS Intelligent Decisioning.
  • Monitoring dashboards in SAS Visual Analytics.
  • Scripts and custom steps to interact with LLMs.

 

The SAS Agentic AI Accelerator is evolving rapidly, with components being added, updated, or removed as we speak.

 

Building Agentic AI Workflows

 

To create your own workflows using the SAS Agentic AI Accelerator, follow these steps:

 

  1. Set up your SAS Viya environment: Ensure your environment is configured for the accelerator.
  2. Register and publish models: Use scripts to register models in SAS Model Manager and publish them from there.
  3. Deploy and score models: Prepare models for execution in target environments.
  4. Build workflows: Design agentic AI workflows using SAS Intelligent Decisioning.
  5. Deploy workflows: Publish and score workflows for production use.
  6. Integrate into applications: Embed workflows into enterprise systems.
  7. Monitor usage: Keep track of workflow performance and effectiveness.

 

 

1. SAS Viya Environment

 

For demonstration purposes, SAS Viya Enterprise 2025.03 (stable and LTS) was used, deployed on Azure and configured with Python, Kaniko (for building SAS Container Runtime images), and Azure publishing destinations.
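As a quick, hedged sanity check of that configuration, you can list the publishing destinations defined in the environment with the python-sasctl package; the host and credentials below are placeholders for your own environment.

```python
# Hedged sketch: confirm that publishing destinations (for example an Azure
# container destination) are configured, using python-sasctl.
from sasctl import Session
from sasctl.services import model_publish as mp

with Session("https://viya.example.com", "sasuser", "password"):
    for dest in mp.list_destinations():
        # Field names follow the SAS Model Publish API (e.g. destinationType).
        print(dest.name, "-", dest.destinationType)
```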

 

While fine-tuning the environment for model publishing can be challenging, it’s entirely achievable.

 

 

2. Register and Publish Models

 

The SAS Agentic AI Accelerator includes a code repository that simplifies model registration and publishing. Large Language Models (LLMs) wrapped in code can be registered as models in SAS Model Manager. Registration from the Git repository is facilitated by a script.
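The repository's script is not reproduced here, but as a rough, hedged sketch of the registration step, registering a code-wrapped LLM with python-sasctl might look like the following. The project name, model name, and file names are illustrative placeholders; the actual script in the accelerator repository may differ.

```python
# Minimal sketch of registering a code-wrapped LLM in SAS Model Manager
# with python-sasctl. Names and files are placeholders, not the accelerator's.
from sasctl import Session
from sasctl.services import model_repository as mr

with Session("https://viya.example.com", "sasuser", "password"):
    # Reuse the project if it already exists, otherwise create it.
    project = mr.get_project("LLM Models") or mr.create_project(
        "LLM Models", repository=mr.default_repository()
    )

    # Create the model entry that will hold the code wrapper.
    model = mr.create_model(
        "gpt-4o-mini-wrapper",
        project,
        description="Code-wrapped LLM for agentic AI workflows",
    )

    # Attach the wrapper code and its Python requirements as model contents.
    with open("score_llm.py", "rb") as f:
        mr.add_model_content(model, f, name="score_llm.py", role="score")
    with open("requirements.json", "rb") as f:
        mr.add_model_content(model, f, name="requirements.json")
```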

 

 

Code Repositories

 

  • SAS_LLM_UCF (owner: David Weik). The primary and most up-to-date repository. The repository is private, available only to SAS employees. If/when it is offered publicly, I will post an update.
  • Agentic AI - How to with SAS Viya (owner: Education (GEL)). A subset of the original repository, designed for educational workshops and provided with the workshop environment.

 

Code-Wrapped Large Language Models

 

Code wrappers serve as deployment instructions for Large Language Models (LLMs). When wrapped in code, LLMs can be registered as models in SAS Model Manager. These wrappers standardize inputs and outputs, making it easier to integrate or replace models in workflows, regardless of their type or source.
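As a purely illustrative sketch (not the accelerator's own wrapper), a minimal wrapper for an OpenAI-hosted model could look like the code below. The function and variable names, model choice, and docstring convention are assumptions; the wrappers in the repository add error handling, system prompts, and usage logging.

```python
# Hypothetical code wrapper for an OpenAI-hosted LLM. The standardized
# interface is one text input and one text output, so the model behind it
# can be swapped without changing the calling workflow.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def score(user_prompt):
    "Output: generated_text"
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_prompt}],
        temperature=0,
    )
    generated_text = response.choices[0].message.content
    return generated_text
```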

 

[Figure: the LLM model project in SAS Model Manager (LLM_Model_Project.png)]

 

 

Why Code Wrappers Matter

 

By standardizing inputs and outputs, code wrappers simplify the deployment process and ensure consistency and reusability across workflows. However, registered models cannot be scored directly in SAS Viya; they must first be deployed, typically to a container environment such as Docker, to enable execution and scoring.

 

[Figure: GPT-4o-mini score code (gpt4oMiniScoreCode.png)]

 

Governance

 

Models registered in SAS Model Manager are governed, ensuring:

 

  • Version control.
  • Permission settings for access and usage.
  • Tracking of model publishing destinations.
  • Detailed documentation through model cards.

 

Publishing Models

 

When models are published to a container destination, such as Azure, the code wrappers are transformed into Docker images. These images are portable, allowing deployment across different cloud platforms or on-premises environments.
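As a hedged sketch, publishing a registered model can also be scripted with python-sasctl; "AzureDest" below is a placeholder for whatever container destination name is defined in your environment.

```python
# Hedged sketch of publishing the registered model to a pre-configured
# container destination. "AzureDest" is a placeholder destination name.
from sasctl import Session, publish_model

with Session("https://viya.example.com", "sasuser", "password"):
    # Publishing builds a Docker image from the code wrapper and pushes it
    # to the container registry behind the destination.
    module = publish_model("gpt-4o-mini-wrapper", "AzureDest")
    print(module)
```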

 

[Figure: published model images in Azure Container Registry (AzureContainerRegistryImages.png)]

 

 

Publishing also enables models to be used via REST APIs, which are essential for scalable integration into agentic AI workflows. REST APIs facilitate real-time communication between systems, ensuring seamless interaction with enterprise applications.
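For illustration only, calling such a published model over REST might look like the sketch below. The URL, module path, and payload shape are assumptions to verify against the SAS Container Runtime documentation for your deployment.

```python
# Purely illustrative: scoring a published model over REST. Host, port,
# module path, and payload shape are assumptions, not a documented API.
import requests

SCORE_URL = "http://localhost:8080/gpt4ominiwrapper"  # placeholder module URL

payload = {"inputs": [{"name": "user_prompt", "value": "Summarize this claim."}]}
response = requests.post(SCORE_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json())
```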

 

While this post provides an overview of model publishing, we’ll dive deeper into the deployment process in a future article.

Read SAS Agentic AI – Deploy and Score Models – The Big Picture, where we explore how to deploy and score code-wrapped Large Language Models (LLMs) in Azure.

 

 

Conclusion

 

The SAS Agentic AI Accelerator simplifies the integration of Large Language Models into workflows through the use of code wrappers. These wrappers provide standardized inputs and outputs, allowing models to be registered, governed, and published as Docker images for deployment across various platforms.

 

With the capabilities of SAS Model Manager, you can:

 

  • Govern models effectively with version control and permissions.
  • Publish models to containers for scalable, enterprise-grade applications.

 

Acknowledgement

 

Special thanks to:

 

  • David Weik for invaluable insights and explanations.
  • Xin Ru Lee (@XinRu) for sharing and assistance on numerous occasions.

 

Workshop Environment

The Agentic AI – How to with SAS Viya workshop is now available on learn.sas.com to SAS Customers and SAS Employees. It provides step-by-step guidance and a bookable environment for creating agentic AI workflows. For SAS Customers, the workshop is included in the SAS Decisioning Learning Subscription.

 

Additional Resources

 

 

If you liked the post, give it a thumbs up! Please comment and tell us what you think about the SAS Agentic AI Accelerator, and reach out if you need further guidance. Let us know how this solution works for you!

 

 

Find more articles from SAS Global Enablement and Learning here.
