
Using Azure OpenAI GPT Models in SAS Viya


Azure OpenAI Service provides access to Large Language Models (LLMs) through a REST API, a Python SDK, and a web-based interface in the Azure OpenAI Studio. Various models are available out of the box, including GPT-4, GPT-4 Turbo with Vision, GPT-3.5-Turbo, and the Embeddings model series. You can read more about Azure OpenAI Service here. To incorporate these models in SAS Intelligent Decisioning and SAS Model Manager, we just need to create a Python function and scoring code that calls the REST API of a model deployed in Azure OpenAI. At the time of writing, the Python SDK does not support all calls for each model type, but that may change in the future.

 

Before you can leverage a model, you must create an Azure OpenAI Service resource and deploy your model. This tutorial walks through setting up an Azure OpenAI Service and deploying a model in a few simple steps. For this example, I've used GPT-3.5-Turbo, but note that not all models are available in all regions. Next, I recommend walking through this tutorial with REST as your preferred method. Importantly, save your endpoint, API key, and deployment name. We will need them for the next steps.
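Your exact endpoint is shown in the Azure portal, but Azure OpenAI chat-completions endpoints generally follow a predictable pattern. Here is a sketch of how it is composed; the resource name, deployment name, and api-version below are placeholders and may differ in your subscription:

```python
# Sketch of the Azure OpenAI chat-completions endpoint pattern.
# Resource name, deployment name, and api-version are placeholders.
resource = "my-openai-resource"
deployment = "my-gpt-35-turbo"
api_version = "2023-05-15"

url = (f"https://{resource}.openai.azure.com/openai/deployments/"
       f"{deployment}/chat/completions?api-version={api_version}")
```

The deployment name here is the name you chose when deploying the model, not the model family name itself.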

 

I’ll outline the steps below, but the code for this example is available in this notebook.

 

Integration with SAS Model Manager

 

To run our model in SAS Model Manager, we need to write our score code and save it as a .py file. We also need to specify the inputs to our score code, the outputs the score code generates, and properties of the model. Then we can register it all into SAS Model Manager directly from our notebook using the python-sasctl package.

 

For our score code, we will need the following:

  1. An import statement for the requests package. The requests package allows us to call REST APIs from Python.
  2. A function definition. When defining our function, we must consider how we want users to interact with the model. Will they be able to pass their prompt directly to the function? Will they pass parameters that populate a prompt template? It is your function! Build it to suit your needs.
  3. An output statement so SAS Model Manager knows what output to expect.
  4. Within your function definition, you can add any pre-processing of your prompt before saving it into your data json object.
  5. Using an LLM may require a key, so within the function definition, select a key management strategy that works best for your use cases and security needs. The example notebook demonstrates a few options for key management.  
  6. Within your function, make the POST request using your model’s URL, passing along the key and data json object, and save the output of the POST request to a variable.
  7.  Optionally, you can add any post-processing to the output of the REST API call before returning the output.

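One simple key-management option, for example, is reading the key from an environment variable rather than hard-coding it in the score code. A minimal sketch (the variable name `AZURE_OPENAI_KEY` is an assumption; use whatever naming fits your environment):

```python
import os

def get_api_key(var_name="AZURE_OPENAI_KEY"):
    """Return the API key from an environment variable instead of hard-coding it.

    The variable name is an assumption for this sketch.
    """
    key = os.environ.get(var_name)
    if key is None:
        raise RuntimeError(f"Environment variable {var_name} is not set.")
    return key
```

This keeps the key out of your score code file, though you will need to ensure the variable is available wherever the model executes.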
 

Thus, your score code may look something like this:

 

import requests

def score(prompt):
    "Output: output"

    # Endpoint and key for your deployed Azure OpenAI model
    url = 'YOUR-MODEL-ENDPOINT'
    k = 'YOUR-API-KEY'

    h = {"Accept": "application/json",
         "Content-Type": "application/json; charset=utf-8",
         "api-key": k}

    # Chat-completions request body: a single user message containing the prompt
    data = {"messages": [{"role": "user", "content": prompt}]}

    # Call the model's REST API
    response = requests.post(url, json=data, headers=h)

    # Pull the generated text out of the first choice in the response
    jsonResponse = response.json()
    output = jsonResponse["choices"][0]['message']['content']

    return output
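For reference, the REST call returns JSON shaped roughly like the following. This is an illustrative, trimmed sketch of the chat-completions response (real responses also include ids, usage statistics, and more), which is why the score code indexes `choices[0]['message']['content']`:

```python
# Illustrative, trimmed shape of an Azure OpenAI chat-completions response.
# Field values here are made up for demonstration.
sample_response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ]
}

# The same indexing the score code uses to extract the generated text
output = sample_response["choices"][0]["message"]["content"]
```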

 

If you are working in a notebook, you can write the score code out to a file by adding this line to the top of your score code:

 

%%writefile YOUR-MODEL-NAME.py

 

Using the python-sasctl package, you can also generate the metadata for the model and register it to SAS Model Manager without needing to leave your Python development environment. After authenticating to the SAS Viya environment using python-sasctl, specify the input and output variables, create the model, and upload your score code file.
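As a sketch of the authentication step, python-sasctl provides a `Session` object for connecting to SAS Viya. The hostname and credentials below are placeholders; adjust them for your deployment:

```python
def connect_to_viya(host, username, password):
    """Open a python-sasctl session against a SAS Viya environment.

    Host and credentials are placeholders for this sketch.
    """
    from sasctl import Session  # deferred import keeps this sketch self-contained
    return Session(host, username, password)

# Example usage (placeholders), plus the service the registration code below uses:
# from sasctl.services import model_repository as mr
# sess = connect_to_viya('https://your-viya-host.com', 'your-user-id', 'your-password')
```

Sessions can also be used as context managers (`with Session(...) as sess:`) so they close cleanly when registration finishes.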

 

# Update these variables to match your project
project = '[INSERT-YOUR-PROJECT-NAME]'
model_name = '[INSERT-YOUR-MODEL-NAME]'
algorithm = '[INSERT-YOUR-LLM-ALGORITHM]'

# Specify input variables and output variables
inputvariables = [{'name': 'prompt', 'role': 'input', 'type': 'string', 'level': 'nominal', 'length': 500}]
outputvariables = [{'name': 'output', 'role': 'output', 'type': 'string', 'level': 'nominal', 'length': 500}]

# Create the model
model = mr.create_model(
    model=model_name,
    project=project,
    algorithm=algorithm,
    modeler=username,
    tool='Python 3',
    function = "Text Generation",
    score_code_type = 'Python',
    input_variables = inputvariables,
    output_variables = outputvariables
)

# Add score code
scorefile = mr.add_model_content(
    model,
    open('[INSERT-YOUR-MODEL-NAME].py', 'rb'),
    name='[INSERT-YOUR-MODEL-NAME].py',
    role='score'
)

 

Now, check your SAS Model Manager project: you have executable score code for your OpenAI model that can be run in a scoring test or deployed to MAS or container destinations!

 

Integration with SAS Intelligent Decisioning

 

We can use the Python model we just developed and registered to SAS Model Manager within our decision flow as a model. But if you don't have SAS Model Manager, no need to worry! We can leverage a Python code file in SAS Intelligent Decisioning instead. To run the model, we need to create an execute function and copy its code into a Python code file in SAS Intelligent Decisioning.

 

This function will look very similar to the function we wrote for SAS Model Manager. The difference is that we must add a line declaring the requests package as a dependent package.

 

import requests

def execute(prompt):
    'Output: output'
    'DependentPackages: requests'

    # Endpoint and key for your deployed Azure OpenAI model
    url = 'YOUR-MODEL-ENDPOINT'
    k = 'YOUR-API-KEY'

    h = {"Accept": "application/json",
         "Content-Type": "application/json; charset=utf-8",
         "api-key": k}

    # Chat-completions request body: a single user message containing the prompt
    data = {"messages": [{"role": "user", "content": prompt}]}

    # Call the model's REST API
    response = requests.post(url, json=data, headers=h)

    # Pull the generated text out of the first choice in the response
    jsonResponse = response.json()
    output = jsonResponse["choices"][0]['message']['content']

    return output

 

Now you can leverage your LLM within a decision flow. This is great for building decisioning processes that can use an LLM to generate text while incorporating rules, various data points, and other machine learning models.

 

Conclusion

 

Now, you are all set to leverage an LLM deployed in Azure OpenAI from SAS Model Manager or SAS Intelligent Decisioning, where it can be managed alongside other models in your organization, combined with business logic, orchestrated with other models, and deployed into destinations within SAS Viya or beyond using containers.

 

Want to learn more about LLMs & Generative AI and SAS Viya? Then check out these resources!
