Crystal_Baker
SAS Employee
3 weeks ago
Large Language Models (LLMs) have rapidly become a powerful tool in the data and analytics world. With just a simple instruction, LLMs can execute an array of tasks such as summarization, translation, question answering, and much more. Apart from their pre-existing knowledge, which was distilled during training, LLMs also exhibit a behavior known as in-context learning: the ability to use the context provided in the prompt to inform their response. In-context learning means that the LLM doesn't need to be fine-tuned to perform these tasks; it simply responds based on the instructions, examples, and context given at the time of execution.
To facilitate interaction with an LLM and to take advantage of in-context learning within SAS Viya, we have developed a SAS Studio custom step, a low-code component that enables users to complete specific tasks in a reusable and streamlined manner. The LLM - Azure OpenAI In-context Learning custom step (available on GitHub at https://github.com/SundareshSankaran/LLM-Azure-OpenAI-In-context-Learning) helps you interact with your LLM to perform tasks based on your input data, all while using the UI to guide your prompt engineering. Specifically, this custom step supports in-context learning, where illustrative examples can be included in the prompt to guide the model's response. By adding it to your SAS Studio flows, you can quickly bring LLM capabilities to your analytics pipelines, whether you are summarizing customer feedback, translating responses, or experimenting with ways to extract insights from your text data. And while this step focuses on completing a single task, it's also a building block toward agentic systems, where agents can reason, make decisions, and act within your workflow. This custom step helps pave that path, one prompt at a time.
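Under the hood, in-context learning comes down to how the chat messages sent to the model are assembled. The sketch below is plain Python with made-up review text and a hypothetical build_messages helper (not part of the custom step); it shows that zero-shot and few-shot prompts differ only in whether example user/assistant turns are included:

```python
# Illustration of zero- vs. few-shot prompting using the chat message
# format that OpenAI-compatible APIs expect. All text here is made up.

def build_messages(system_prompt, user_prompt, context, examples=None):
    """Assemble a chat message list: system rules first, then any
    in-context examples as user/assistant turns, then the real request."""
    messages = [{"role": "system", "content": system_prompt}]
    for example_input, example_output in (examples or []):
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": f"{user_prompt}\n\n{context}"})
    return messages

# Zero-shot: no examples, the model relies on the instructions alone.
zero_shot = build_messages(
    "You are a helpful assistant that summarizes customer reviews.",
    "Summarize this review in one sentence:",
    "The blender is loud but crushes ice perfectly. Worth the price.",
)

# Few-shot: two examples show the model the desired style of answer.
few_shot = build_messages(
    "You are a helpful assistant that summarizes customer reviews.",
    "Summarize this review in one sentence:",
    "The blender is loud but crushes ice perfectly. Worth the price.",
    examples=[
        ("Summarize: Battery died after a week.", "Short battery life."),
        ("Summarize: Fast shipping, great fit.", "Quick delivery, fits well."),
    ],
)

print(len(zero_shot), len(few_shot))  # prints: 2 6
```

The model's pre-existing knowledge does the heavy lifting either way; the examples simply steer the format and tone of the response.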
Watch the short demo below to see the custom step in action.
Figure 1: Using LLM - Azure OpenAI In-context Learning custom step to summarize customer reviews.
Requirements
To use this step, you will need the following:
A SAS Viya 4 environment version 2025.02 or later
Python configured and available to your SAS environment.
The following Python packages installed:
openai
pandas
swat
A valid Azure OpenAI service with a large language model deployed. Refer here for instructions.
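Assuming the Python environment configured for SAS Viya has pip available (your deployment may require installing into a specific environment), the three packages above can be installed by their PyPI names:

```shell
pip install openai pandas swat
```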
How to Get Started
Once you have all of the requirements, use the following steps to get started:
(Required only if you are using CAS tables) Start a CAS session.
Add a table to your flow.
Keep in mind that a text column will serve as context in your prompt to your LLM. So, make sure that the table you choose has the appropriate text needed for your prompt.
Add the custom step to your flow.
Add the LLM – Azure OpenAI In-context Learning step into your flow from the Shared steps tab and connect the input table to the input port on the custom step.
Select your text column.
On the Parameters tab, select your text column that you’d like to use in your prompt.
Figure 2: Select text column to use as context within your prompt.
Write your prompt.
Provide a system prompt. System prompts typically set guidelines and instructions for the LLM.
Figure 3: An example of a system prompt.
Provide a user prompt. Here, you’ll add your question or command for the LLM.
Figure 4: Example of a user prompt for summarization.
(Optional) Add examples to guide the model’s response. This is where in-context learning comes into play. You can include one example (one-shot prompting), a few examples (few-shot prompting), or none (zero-shot prompting) depending on how much guidance you want to give the model.
Figure 5: An illustrative example showing the LLM how to respond.
Choose whether to add your question to the output table.
If you'd like to include your user prompt as a column in the output table, check the checkbox next to Add question to output.
Set your model parameters.
Temperature: Controls how creative or focused the model's response is. Lower values generate more focused answers; higher values allow for more variation. Default is 1.
Top P: Sets a probability threshold for selecting the next token, another way to manage randomness in the response. Default is 1.
Max Tokens: Limits the length of the model's output. As a rule of thumb, one token equals about four characters of English text. Default is none (no explicit limit).
Frequency Penalty: Discourages repetition by penalizing tokens that appear multiple times in the response. Default is 0.
Presence Penalty: Encourages novelty by applying a penalty to any token that has already been used, even once. Default is 0.
Figure 6: Output specifications related to the model parameters and the output table.
Configure your model connection.
Provide the name of your Azure OpenAI model deployment.
Provide the path to your API key file. This can be stored as a text file within the SAS server.
Provide your endpoint URL, model region, and API version.
Figure 7: Example of required details about your model needed for successful execution.
Connect the output table.
This table will contain all the original columns, the response from the LLM, and the user prompt if the Add question to output box is checked.
Run your flow and view the results.
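For orientation, the steps above can be sketched in plain Python using the openai package's Azure client. The deployment name, endpoint, API version, and key-file path in the usage note are placeholders, and build_request / score_table are hypothetical helpers for illustration, not the step's actual internals; the openai import is deferred so the request-building part runs without the package installed.

```python
def build_request(deployment, system_prompt, user_prompt, text,
                  temperature=1.0, top_p=1.0, max_tokens=None,
                  frequency_penalty=0.0, presence_penalty=0.0):
    """Map the step's model parameters onto chat-completion arguments."""
    return {
        "model": deployment,                     # Azure deployment name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"{user_prompt}\n\n{text}"},
        ],
        "temperature": temperature,              # creativity vs. focus
        "top_p": top_p,                          # nucleus sampling threshold
        "max_tokens": max_tokens,                # None = no explicit limit
        "frequency_penalty": frequency_penalty,  # discourage repetition
        "presence_penalty": presence_penalty,    # encourage novelty
    }

def score_table(df, text_column, deployment, endpoint, key_file,
                api_version, system_prompt, user_prompt, **params):
    """Add an 'llm_response' column to a pandas DataFrame, one call per row."""
    from openai import AzureOpenAI  # deferred: only needed for real API calls
    with open(key_file) as f:       # API key stored as a text file on the server
        api_key = f.read().strip()
    client = AzureOpenAI(api_key=api_key, api_version=api_version,
                         azure_endpoint=endpoint)
    out = df.copy()
    out["llm_response"] = [
        client.chat.completions.create(
            **build_request(deployment, system_prompt, user_prompt, text, **params)
        ).choices[0].message.content
        for text in df[text_column]
    ]
    return out
```

A call such as score_table(reviews, "review_text", "my-gpt-4o-deployment", "https://my-resource.openai.azure.com", "/path/to/key.txt", "2024-02-01", system_prompt, user_prompt, temperature=0.2) would then mirror one run of the step; every identifier in that call is a placeholder for your own deployment details.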
This custom step makes it easy to integrate Azure OpenAI’s language models into your SAS Studio flows. Once configured, you can start generating your own LLM-driven insights based on your text data. Enhancements are on the way to provide even more flexibility in how you interact with LLMs, so stay tuned for future updates! Feel free to contact me with any questions or comments!
Some Useful Resources:
Download the step from our GitHub repo: LLM - Azure OpenAI In-context Learning
How to import a custom step
How to Build Flows with SAS Studio
- Find more articles tagged with:
- Agentic AI
- generative AI
- natural language processing
- SAS Studio Custom Steps
- SAS Viya
04-23-2024
10:27 AM
Watch this Ask the Expert session to learn how the integration between these SAS solutions helps MLOps engineers, data scientists and business users improve analytics success by working together.
Watch the Webinar
You will learn about:
How SAS® Viya® supports enterprise team collaboration to efficiently build AI and analytics systems.
Core functionality of SAS Model Manager for MLOps engineers, including registration, testing, governance, monitoring and integration with solutions such as SAS Intelligent Decisioning.
Core functionality of SAS Intelligent Decisioning for decision analysts, including rule building, ML/AI implementation, code support, orchestration and deployment.
The questions from the Q&A segment held at the end of the webinar are listed below and the slides from the webinar are attached.
Q&A
After model decay is detected, what are the next steps?
We didn't dive too deep into model decay in this example, but when we develop our machine learning models, they are just representations of a pattern detected in the world around us at a specific point in time. Our world is ever changing: new things appear all the time, and things phase out. So, when our models are trained on a specific pattern at a single point in time, they become less effective at prediction once that pattern no longer holds. This process is called model decay.
Within our SAS model management tool, we have capabilities for setting thresholds and sending tasks or notifications whenever your model's performance doesn't meet that threshold, so that users can come in and decide what's next. Do they retrain their models in SAS Model Studio? Do they select a challenger model that looks like it's still performing as well as the champion, or better? We do support performance monitoring for multiple models, so you can compare those challenger models side by side with the champions. You can quickly see that it's time to replace the production model with a challenger, or decide to retire the project and build new models. There are a few different options available, but we do help with letting individuals know when it is time to address model decay. From there, we can replace production models when we go to deploy; there's a toggle to replace the existing model with the same name. Or we can just let Crystal know that it's time to swap out that model, as you've seen in the demonstration. We can even very quickly share our models; there's a share button, so I can share one over Teams if I want to speed things along.
What else can I do to ensure my decision flows leverage Responsible and Trustworthy AI best practices?
There are a few things that we can do to better ensure that our decisions fall into that responsible and trustworthy AI category. First, when we are using models within our decisions, we want to make sure that we're working with the data scientists, or whoever is developing and maintaining those models, because a large part of how our decisions perform depends on the outputs of those models. So, we need to make sure that our models are up to date; if retraining is needed, retrain the models before they get stale. In addition, we also want our decisions to be transparent, so that we can explain how a decision was made. With SAS Intelligent Decisioning, there are a few ways we can do this. When we publish decisions, we can navigate to the Deployments tab to manage the different deployed decisions that we have. We can understand where they've been deployed to, who deployed them, and the date of publishing. We can also get a better understanding of how the decision is running and what it is using to make decisions by easily generating a report, which helps us create more transparent decisions.
We can also track decisions with decision path analysis, rule-fired analysis, and the output decision variables. With this data, we can do analysis to better understand how decisions are made and ensure that we are making responsible decisions.
While developing category models and churn models, do you actually code or is it done by SAS Viya itself?
What's nice about SAS Viya is that it's almost like a choose-your-own-adventure when it comes to how you develop your models. It can be yes code, no code, low code, or somewhere in between. For this particular example, the models we used were developed using a tool called SAS Model Studio, a GUI (graphical user interface) based approach where you don't have to do any coding yourself. You can drag and drop nodes into a pipeline, or use automated machine learning to develop that pipeline for you. You don't have to code at all to build the models we used today. But that doesn't mean you can't code: users can also code in SAS, Python, or R to develop their models, and we have a few different options for those users. It is nice and flexible, so you can bring together skill sets across your organization, whether people can code, enjoy it, and want to stick with that method, or whether they prefer to iterate very quickly using a drag-and-drop tool. There are a variety of options for how these models can be developed.
How can I learn more?
Sophia: Besides joining us at SAS Innovate, both Model Manager and Intelligent Decisioning are very active on SAS communities. I post under the SAS Model Manager label whenever we have new features, and many of our experts in the community post interesting use cases they come across, and interesting problems they've solved. So, it's definitely worthwhile to subscribe to the SAS Model Manager label on SAS Communities.
Crystal: SAS Communities is always a good one for SAS Model Manager or SAS Intelligent Decisioning. We have some blogs on the sas.com page and also in SAS Communities. I also suggest taking a look at the previous webinars, because we have a lot of good tutorials and how-to videos that show how we move through either SAS Model Manager or SAS Intelligent Decisioning. The previous webinars can be especially helpful if you are new to either of these solutions.
SAS Model Manager Communities Link: SAS Communities: SAS Model Manager
SAS Intelligent Decisioning Communities Link: SAS Communities: Decisioning
SAS Ask the Expert Webinars: Ask the Expert - SAS Support Communities
Did you use logistic regression to develop the model?
In this example, the category model is a rules-based text model, and the churn model is a gradient boosting model, but logistic regression models and many other machine learning models are supported.
Recommended Resources
Essential Functions of SAS Intelligent Decisioning
Manage Models in SAS Viya Training
SAS Intelligent Decisioning Homepage
SAS Model Manager Homepage
SAS Viya Homepage
Please see additional resources in the attached slide deck.
Want more tips? Be sure to subscribe to the Ask the Expert board to receive follow up Q&A, slides and recordings from other SAS Ask the Expert webinars.
02-23-2024
04:50 PM
Watch this Ask the Expert session to learn about the newest innovations in SAS Intelligent Decisioning and see how and why users can best capitalize on these features.
Watch the webinar
You will learn:
The innovative decisioning capabilities of SAS Intelligent Decisioning.
How to best apply these capabilities for your decision flows.
How SAS Intelligent Decisioning supports trustworthy AI.
This webinar is the second in a two-part series. To register for and watch part one, click here.
The questions from the Q&A segment held at the end of the webinar are listed below and the slides from the webinar are attached.
Q&A
Is there a visualization tab in the tool? So, if we were to present our final findings to our stakeholders with some visualizations, that would be possible?
With SAS Visual Analytics, you can take your output data from your tests and scenarios and go directly to SAS Visual Analytics where you can explore and visualize the results. The image below shows where this action can be found in your tests.
The screen you just showed is the test result, right? Before publishing?
Yes, these tests are done before publishing. After publishing the decision, a publishing validation test will be automatically generated where you can test your decision in its published destination.
Is this full potential a feature for the latest SAS version?
Correct. This demo was done in the Stable 2024.01 release.
My agency has not yet adopted SAS Viya. We are still with SAS 9.4 Software. How can we use SAS Intelligent Decision Making? Can SAS work with those firms/agencies that still contract with you for SAS 9.4?
Yes. SAS still works with those on SAS 9.4. SAS Intelligent Decisioning is available. Please reach out to your SAS representative for a conversation.
Is it possible to use decision inside decision, flows inside flows, models inside flows and decisions?
Yes, you can add a decision node object to your decision flow to encapsulate the content from one decision into another decision.
How can I use SAS Intelligent Decisioning integrated with SAS CI360 engage direct?
SAS CI360 includes a system connector for SAS Intelligent Decisioning such that data can be passed from SAS CI360 to SAS Intelligent Decisioning, and decision results are made available from SAS Intelligent Decisioning to SAS CI360. Below are some examples of how you can use SAS CI360 and SAS Intelligent Decisioning.
Intelligent Decisioning & Open-Source Analytics with SAS CI360 [Financial Services Industry Demo]
Intelligent Decisioning & Open-Source Analytics with SAS CI360 [Hospitality Industry Demo]
Can we containerize decisions and flows?
Decisions can be published to container destinations such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, and private Docker repositories.
While using a containerized version of flows, models, and decisions, is it possible to use database queries inside that containerized version?
Yes, for decisions published to container destinations, you can make queries to database types such as Oracle, PostgreSQL, and Microsoft SQL Server.
Recommended Resources
SAS Decisioning Home Page
Ask the Expert: How do I use SAS Intelligent Decisioning?
SAS Intelligent Decisioning Overview Page
Essential Functions of SAS Intelligent Decisioning
Please see additional resources in the attached slide deck.