Execute Score Tests in Python and Enhancements in Automated Python Score Code Generation
Beyond the recent additions supporting the model card, python-sasctl has gained two new capabilities: score testing and enhanced automated score code generation. With the recent release of python-sasctl, users who work in Python have even more functionality at their fingertips!
Create and Execute Score Tests from Python
SAS Model Manager is accessible to a variety of users, with many folks using the user interface to complete tasks. But SAS Model Manager also offers open APIs that developers can build on. This includes our python-sasctl team, who maintain a Python interface to SAS Model Manager. Through this interface, Python users can already register their models, create model metadata, write model score code (more on this later in the article), publish models, and much more. The team continues to expand how users can interact with SAS Model Manager, and in the latest release they’ve added score testing to the list of actions available to users working solely in Python.
Within the Scoring tab of SAS Model Manager, users can select a model, a model version, and a data set to create a scoring definition. Users can then run this definition to ensure their models are working as expected. And with the latest release of python-sasctl, users can define and execute scoring tests inside SAS Model Manager without leaving their Python notebook! Python-sasctl now includes two new functions:
- ScoreDefinitions.create_score_definition() to create a scoring definition.
- ScoreExecution.create_score_execution() to execute a scoring definition.
Once complete, the score test results are viewable in SAS Model Manager.
Users can now generate their score code, register their models, and test them inside SAS Model Manager, ensuring they run as expected! And users aren’t limited to newly registered Python models or newly created score tests: they can create and run scoring tests against any model in their project, or simply execute existing definitions. For teams automating processes in Python, this opens the door to more testing functionality. To learn more about these new functions in python-sasctl, check out this notebook.
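Putting the two new functions together, the workflow can be sketched as below. The class and function names come from this article, but the parameter names and import paths are illustrative assumptions; see the linked notebook for the exact signatures.

```python
# Hedged sketch of the new score-test workflow. The class names
# (ScoreDefinitions, ScoreExecution) come from the article; the parameter
# names below are assumptions -- consult the notebook for exact signatures.

def run_score_test(host, username, password, model_id, table_name):
    from sasctl import Session
    from sasctl._services.score_definitions import ScoreDefinitions
    from sasctl._services.score_execution import ScoreExecution

    with Session(host, username, password):
        # Create a scoring definition: it pairs a registered model version
        # with a data set, just like the Scoring tab in the UI.
        definition = ScoreDefinitions.create_score_definition(
            score_def_name="my_score_test",  # illustrative name
            model=model_id,
            table_name=table_name,
        )
        # Execute the definition; once complete, the results are
        # viewable in SAS Model Manager.
        execution = ScoreExecution.create_score_execution(definition)
        return execution
```

The imports are kept inside the function so the sketch can be read (and imported) without a live SAS Viya connection.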
Score Code Generation Enhancements
On the topic of score code, let’s dive deeper into what score code is and how it is used. Score code is not training code. Data scientists use training code to create a model from data tagged with known values for the target. The goal of training a model is to build an object that takes data as input, runs calculations, and returns a prediction for the target. When working in Python, this model is often saved as a pickle file. To score new data with the pickle file, you first load it to recover the model object, then send the new data to the model and collect its predictions for the target.
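The pickle round trip described above can be sketched with a toy model. This is a minimal stand-in for a real trained model object, with no SAS or modeling-library dependencies assumed:

```python
import pickle

class MeanModel:
    """Toy stand-in for a trained model: always predicts the mean of the
    training targets, regardless of input features."""
    def __init__(self, targets):
        self.mean_ = sum(targets) / len(targets)

    def predict(self, rows):
        # A real model would use the input features; this toy ignores them.
        return [self.mean_ for _ in rows]

# "Training" produces a model object, which is saved as a pickle file.
model = MeanModel(targets=[10.0, 20.0, 30.0])
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Scoring later: load the pickle file to get the model back,
# then send new data to it.
with open("model.pkl", "rb") as f:
    loaded = pickle.load(f)

print(loaded.predict([{"x": 1}, {"x": 2}]))  # [20.0, 20.0]
```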
Scoring new data with a model is a much simpler process than training one. And python-sasctl has been able to automatically generate scoring code for years, following a simple structure:
- Take input data
- Load model
- Send input data through model
- Process and return output
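The four steps above map directly onto the shape of a generated score function. Here is a minimal, self-contained sketch; the function and variable names are illustrative, not the exact code sasctl emits:

```python
import pickle

# Setup so this sketch is self-contained: pickle a toy model
# that doubles its single input feature.
class DoubleModel:
    def predict(self, rows):
        return [2 * r[0] for r in rows]

with open("model.pkl", "wb") as f:
    pickle.dump(DoubleModel(), f)

# --- Generated score code typically follows the structure below ---

# Load model (done once at module load, so repeated score calls stay fast).
with open("model.pkl", "rb") as f:
    _model = pickle.load(f)

def score(value):
    # Take input data and arrange it the way the model expects.
    row = [[value]]
    # Send input data through the model.
    prediction = _model.predict(row)
    # Process and return output (unwrap the single prediction).
    return prediction[0]

print(score(21))  # 42
```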
But in the real world, data can be messy, and a model may be finicky about the data it accepts. So users sometimes write their own score code to customize the preprocessing applied before the data reaches the model, like so:
- Take input data
- Preprocess data to meet model’s expectations
- Load model
- Send input data through model
- Process and return output
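The extra preprocessing step can be sketched as below, again with a toy model standing in for one loaded from a pickle file. The imputation and casting rules here are just examples of the kind of cleanup real score code performs:

```python
class SumModel:
    """Toy model: predicts the sum of the numeric input features."""
    def predict(self, rows):
        return [sum(r) for r in rows]

_model = SumModel()  # stands in for a model loaded from a pickle file

def preprocess(raw_row):
    """Preprocess data to meet the model's expectations:
    cast strings to floats and impute missing values with 0.0."""
    return [0.0 if v in (None, "") else float(v) for v in raw_row]

def score(raw_row):
    row = [preprocess(raw_row)]       # take input data, preprocess it
    prediction = _model.predict(row)  # send input data through the model
    return prediction[0]              # process and return output

print(score(["1.5", None, "2.5"]))  # 4.0
```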
Now, instead of writing the whole score code themselves, users can define what preprocessing needs to take place and pass it as a parameter to the python-sasctl import model function. This notebook provides an example of leveraging this new parameter.
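A rough sketch of that pattern is below. The `pzmm.ImportModel.import_model` entry point is real, but the `preprocess_function` parameter name and the argument values are assumptions for illustration; the linked notebook shows the exact usage.

```python
# Hypothetical sketch: pass a preprocessing step to score code generation.
# `preprocess_function` is an ASSUMED parameter name; check the notebook
# for the exact signature in your sasctl version.

def preprocess(data):
    # Example preprocessing: fill missing values before the model sees them.
    return data.fillna(0)

def register_with_preprocessing(model_files, model_prefix, project, input_df):
    from sasctl import pzmm
    return pzmm.ImportModel.import_model(
        model_files=model_files,
        model_prefix=model_prefix,
        project=project,
        input_data=input_df,
        preprocess_function=preprocess,  # assumed parameter name
    )
```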
With the recent release of python-sasctl, users can add preprocessing to their score code generation as well as confirm that their score code works as expected within SAS Model Manager without needing to leave their Python development environment.
What would you like to see next for python-sasctl? Let us know in the comments!
To learn more about Python-sasctl and SAS Model Manager, check out the following resources:
- MLOps Uncoiled: Python’s Path on SAS Viya with SAS Model Manager
- Python-sasctl GitHub
- Score Millions of Records in Minutes Using Python Models: Python Container Scoring Optimization
- MLOps for Pirates and Snakes: The Sasctl Packages for R and Python
- IDC MarketScape: Worldwide Machine Learning Operations Platforms 2022 Vendor Assessment
- ModelOps Explained: A Starter’s Guide to Deploying and Managing AI and Analytical Models
- SAS Viya & Open Source