With the release of SAS Viya 2024.07, a model card is now available in SAS Model Manager. The model card in SAS Model Manager was built to be like a nutrition label for AI models. What sets the SAS Model Card apart from previous model cards is its use of descriptive visuals, which make model cards accessible to every persona involved in the analytics process: data scientists, data engineers, MLOps engineers, managers, executives, risk managers, business analysts, end users, and any other stakeholder with access to the SAS Viya environment. Additionally, most of the model card populates automatically as the model is developed, managed, and deployed in SAS Viya.
Model Card example
The first release of the model card supports classification and prediction models from various sources. As users and teams develop and manage their models within SAS Viya, more of the model card populates automatically. The model card brings together information about the training data, model performance at the time of training, and model performance over time. In this article, we will review how to generate a complete model card for models from SAS’s no-code / low-code interface, SAS Model Studio. But fear not! This article is the first of a series, and we will focus on Python models in part 2.
The Model Card
The first step in creating your model card is registering your model. From SAS Model Studio, you can register models from Data Mining and Machine Learning pipelines on the Pipeline Comparison tab. Select the models you want to register, then open the options menu and click “Register models”. Once your models are registered, you can open them in SAS Model Manager.
Registering a model in SAS Model Studio
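For teams that prefer scripting over the UI, registration can also be done programmatically with the open-source python-sasctl package. The sketch below is an outline under stated assumptions: the host name, credentials, project name, and training data are placeholders, and argument details may vary by sasctl version.

```python
# Sketch: registering a fitted scikit-learn model with python-sasctl,
# the open-source Python interface to SAS Viya. The host, credentials,
# and project name below are placeholders, not values from the article.
def register_sklearn_model(model, model_name, project_name, X_train):
    # Lazy import so the sketch can be defined even without sasctl installed
    from sasctl import Session
    from sasctl.tasks import register_model

    with Session("my-viya-host", "username", "password"):
        # register_model generates score code and metadata for the model
        # and creates it inside the named SAS Model Manager project.
        # `input` lets sasctl infer the input variable metadata.
        register_model(model, model_name, project_name, input=X_train)
```

A model registered this way appears in SAS Model Manager just like one registered from SAS Model Studio, though some model card fields that Model Studio fills in automatically may need to be completed manually (part 2 of this series covers Python models in detail).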
The Model Card will appear as the first tab of a model instance inside SAS Model Manager when the model function is prediction or classification. If you’ve registered your model from SAS Model Studio, ensure that you are running SAS Viya 2024.07 (or later) and that your model has the function property listed as prediction or classification.
Model Card immediately after registration
Left-Hand Pane
The left-hand pane contains three sets of information: tags, the modeler, and the responsible party. The modeler populates automatically for models registered from SAS Model Studio. The responsible party is the user or group responsible for the model, and this field is populated at the project level. A link to the project is available near the top of the screen. Within the project, navigate to the Properties tab and then the Model Usage section. You can update the responsible party here, as well as any model usage fields.
Updating the Model Card left-hand pane
Overview
The Overview section provides an at-a-glance review of the model. It provides visuals of model health that are easy to understand and are supported by other sections of the card. This section reports on model performance during training and over time. It also reports on influential variables, variable privacy classifications, and completeness of the model card. Most of the data in the Overview section will be populated automatically for models coming from Model Studio, but there are areas that will require attention when first registered.
If you notice a blue warning about the thresholds for your training metrics, you can review and update the thresholds for action in the project properties under model evaluation. We encourage users to update these thresholds to ensure they are appropriate for the current use case, since thresholds for action may differ from one use case to the next. A few other areas need attention when the model is first registered:

- For the performance monitoring metrics to become available, complete the steps listed in the Model Audit section of this article.
- To ensure the variable privacy classifications come through, scroll down to the Data Summary and complete the steps listed in the corresponding section below.
- To change the “No” to a “Yes” in the Limitations Documented block, complete the limitations in the Model Usage section of the card, as outlined in the next section of this article.
- To see fairness metrics, you must have selected a variable to assess for bias in SAS Model Studio before running your pipelines.
This Overview section can help provide evidence that your model is in good health or direct your attention to the areas that need work.
Complete Model Card Overview section
Model Usage
The Model Usage section describes the intended usage, expected benefits, out-of-scope use cases, and limitations of the model. This section must be filled out manually at either the project or model level in the Properties tab. The values of the model usage are inherited from the project-level properties. However, model-specific information can be specified for each property value at the model level.
Updating the Model Card Model Usage section
Data Summary
The Data Summary section provides information about the training data from SAS Information Catalog. When registering your model from SAS Model Studio, the training data property should already be populated, but you may be prompted to run an analysis in the Data Summary section. If you see a button in this section, click it to run your analysis. Since the model card is intended to be shareable, you cannot use data stored within a personal CASUSER library.
Once the analysis is complete, you should see a summary courtesy of SAS Information Catalog. This summary contains the number of columns, number of rows, size, status, completeness of the data, information privacy classifications, data tags, and data descriptions. If there are gaps in the description, tags, or status, these can be corrected in SAS Information Catalog. Working with data in SAS Information Catalog may require advanced permissions, so work with your data engineers or data owners to ensure your data’s metadata is complete. Model card users can click the name of the dataset in this section to open the data in SAS Information Catalog in a new tab and, if they have the appropriate permissions, make changes quickly. Having complete metadata about your data is a best practice for building trustworthy models!
Updating the Model Card Data Summary section
Model Summary
The Model Summary section examines the model’s performance at the time of training, which corresponds to the training donut charts in the Overview tab. The information in this tab is populated automatically when a model is registered from SAS Model Studio. The Model Summary section includes information about the model target, algorithm, development tool and version, various accuracy measures across training, testing, and validation splits, generalizability, and variable importance. If you’ve selected a variable in SAS Model Studio to assess for bias, those metrics will also appear in this tab. Overall, this tab provides a ton of information about how well the model performed during training, which can serve as a baseline for monitoring model performance over time.
Complete Model Card Summary section
Model Audit
While the Model Summary section focuses on the model’s performance at the time of training, the Model Audit section reports on performance over time. The Model Audit section also provides a deeper dive into the performance monitoring donut charts in the Overview section.
The Model Audit section relies on two capabilities of SAS Model Manager: performance monitoring and Key Performance Indicator (KPI) alert rules. Performance monitoring reviews model performance over time in batch at user-defined time points. To create a performance monitoring report, you need data that includes the actual, or ground-truth, values; SAS Model Manager then graphs model performance over time. Without the ground truth, you will only get metrics on data drift. You can create a performance monitoring report in a few moments from the project, as outlined in these steps or the demo video below:
Creating and running a performance monitoring definition
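If you prefer to script this step, python-sasctl also exposes the performance-definition workflow. The sketch below is an outline under stated assumptions: the host, credentials, model name, CAS library, and table prefix are all placeholders, and the exact function signatures may differ across sasctl versions, so check the docs for your release.

```python
# Sketch: creating and running a performance monitoring definition with
# python-sasctl. The names below ("my-viya-host", "Public", "hmeq_perf")
# are placeholders, not values from the article.
def monitor_model_performance(model_name):
    # Lazy import so the sketch can be defined even without sasctl installed
    from sasctl import Session
    from sasctl.services import model_management as mm

    with Session("my-viya-host", "username", "password"):
        # Point the definition at the CAS library and table prefix that
        # hold the scored data (with ground-truth values) used to compute
        # performance over time.
        definition = mm.create_performance_definition(
            model=model_name,
            library_name="Public",      # CAS library with performance tables
            table_prefix="hmeq_perf",   # prefix the monitoring job looks for
        )
        # Kick off a run of the definition; the results feed the Model
        # Audit section and the performance donut charts in the Overview.
        mm.execute_performance_definition(definition)
```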
Next, you will set your KPI thresholds in the project properties under model evaluation. You can create a rule in just a few clicks, as outlined in the documentation or the quick demo below:
Creating KPI alert rules
Now, you can view the latest model accuracy, fairness, and model drift metrics against your thresholds on the latest run of performance monitoring.
Complete Model Card Model Audit section
Pulling it All Together
This demo video walks through creating a model and a complete model card:
Creating a complete Model Card
Now you should have everything you need to create a complete nutrition label for your model! Stay tuned for the next article about creating a model card for a Python model. In the meantime, post your questions, feedback, and ideas in the comments below!