Introduction
SAS Model Manager allows users to publish, score test, and performance monitor any version of a registered model. By adding model version as a parameter in python-sasctl (specifically in the functions publish_model, create_performance_definition, and create_score_definition), users get the same version options in python-sasctl that they have in SAS Model Manager.
This article shows how to publish, score test, and performance monitor a specific model version from python-sasctl by passing the version as a string UUID, a string name, or a dictionary.
1) Choose which model version to use
You can view a model’s version history within python-sasctl or through the Model Manager UI. Calling list_model_versions retrieves all of the previous versions of the specified model, whereas calling get_model_with_versions retrieves the current model version along with all of the previous versions. Each item in the returned list is a dictionary representation of a unique model version, and you can access the version UUID and the version name via <version item>.id and <version item>.modelVersionName, respectively.
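As an illustration of the shape of what comes back, the sketch below uses two mock items standing in for real response entries (real items carry many more fields, and sasctl returns them as dict-like objects that also support attribute access):

```python
# Mock stand-ins for the dict-like items returned by get_model_with_versions;
# the UUIDs and names here are made-up sample values, not real responses.
versions = [
    {"id": "00000000-0000-0000-0000-000000000002", "modelVersionName": "2.0"},
    {"id": "00000000-0000-0000-0000-000000000001", "modelVersionName": "1.0"},
]

for v in versions:
    # sasctl's response objects also allow attribute access
    # (v.id, v.modelVersionName); plain dict access is shown here.
    print(v["modelVersionName"], v["id"])
```

Any of these three forms (the id string, the modelVersionName string, or the whole dictionary) can be passed as the model version parameter later on.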
Note: You will need to start a Viya session and import ModelRepository to use the functions above. You will also need to import ScoreDefinitions and ModelManagement to publish, score test, and performance monitor models.
In the Model Manager UI, you can navigate to the Models tab, select the model you are working with, and then select the Versions tab. The numbers listed in the Versions column are another way to view the model version names.
2) Add in the model version parameter
When score testing or publishing a specific model version, you can pass the version UUID, the version name, or the dictionary representation of the model version as the model version parameter. The function returns a response that you can use to check whether the API call succeeded. Alternatively, you can check the UI to confirm that the right model version was score tested or published.
Calling the functions:
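A sketch of what these calls might look like is shown below. The sasctl imports are kept inside the function so the sketch can be read without a live Viya connection, and the keyword names for the version and table arguments are assumptions to be checked against your sasctl release; the destination, table, and score definition names are made up:

```python
def publish_and_score(model_name, version, destination, table_name):
    """Sketch: publish and score test one specific model version.

    `version` may be a version UUID string, a version name string, or the
    dictionary returned by list_model_versions.
    """
    # Imports are function-local so this file parses without sasctl installed.
    from sasctl.services import model_repository as mr
    from sasctl.services import score_definitions
    from sasctl.tasks import publish_model

    model = mr.get_model(model_name)

    # Publish the chosen version; the `model_version` keyword is an
    # assumption based on the parameter described in this article.
    publish_response = publish_model(model, destination, model_version=version)

    # Create a score definition against the same version; the name passed
    # here must be unique or the call will fail (see the note below).
    score_response = score_definitions.create_score_definition(
        score_def_name=f"score_{model_name}_{version}",  # hypothetical name
        model=model,
        table_name=table_name,
        model_version=version,  # assumed keyword name
    )
    return publish_response, score_response
```

Both returned responses can be inspected to verify that the API calls worked before checking the UI.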
You can view the publish log by appending the /modelPublish log URL to the end of your server URL:
Make sure the name you pass to publish_model is unique; otherwise, the log will show an error and publishing will fail.
Seeing scoring in the UI:
Seeing the published model version in the UI:
Creating a performance definition with the model version parameter is different because you can performance monitor multiple models at once. If you are using multiple models, pass an array of their version names or dictionary representations. Make sure the versions are listed in the same order as the models so that each model is paired with its correct version.
Calling the function and returned response:
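A sketch of the multi-model call is below. The keyword names (model, table_prefix, project, model_versions) are assumptions to be checked against your sasctl release, and the project and table prefix are made-up example values:

```python
def monitor_models(model_names, version_list):
    """Sketch: create one performance definition covering several models.

    `version_list` must be in the same order as `model_names` so each model
    is paired with its intended version; entries may be version names or
    version dictionaries.
    """
    # Function-local import so the sketch parses without sasctl installed.
    from sasctl.services import model_management as mm

    return mm.create_performance_definition(
        model=model_names,            # one or more registered models
        table_prefix="perf",          # hypothetical performance table prefix
        project="My Project",         # hypothetical project name
        model_versions=version_list,  # assumed keyword for the versions
    )
```

The returned response can be checked the same way as for publishing and score testing.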
To see the change in the UI, go to Projects, select the project that contains the model(s) you are monitoring, select the Performance tab, click the three vertical dots in the right corner, and choose 'View job history':
You can also pass in an array with a mix of version names and version dictionaries, and it will work the same as long as the order matches.
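Concretely, a mixed versions argument might be assembled like this (the model names, version name, and UUID below are made-up sample values):

```python
# Mixed list: the first model's version is given by name, the second by the
# dictionary returned from list_model_versions (sample data, not a real UUID).
versions = [
    "2.0",
    {"id": "00000000-0000-0000-0000-000000000002", "modelVersionName": "1.0"},
]
models = ["model_a", "model_b"]  # must align index-by-index with versions

# Each model is paired with the version at the same position.
pairs = list(zip(models, versions))
print(pairs)
```

As long as the order matches, each model is monitored against the intended version regardless of which form its entry takes.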
Note: To performance monitor or score test models, the models must be part of a project. In addition, the target variable and champion model must be defined, and the output variables must be mapped for performance monitoring to work.
Conclusion
Now you can publish an older model version and create score and performance definitions for it that can be executed later. This added model version functionality gives data scientists more flexibility in how they use their models within python-sasctl.