**How Viya 3.2 makes model assessment easy**

The new visual interface for SAS Visual Analytics, Visual Statistics and Visual Data Mining and Machine Learning 8.1 on Viya 3.2 provides a variety of model assessment graphs out-of-the-box. It also lets you easily compare different models graphically with just a couple of clicks to help you choose the best model.

Remember that we could create assessment graphs in VA/VS/VDMML on Viya 3.1 using the SAS Studio programming interface by writing code? My earlier post explains how to do that. If you want to do things the hard way (but with more control, so you can decide how to label graphs and how you want them to look), you can still do that.

Maybe you are that DIY gal who wants to build your own custom curtain rods out of wire and screw eyes and turnbuckles and ferrules, or you want to hand paint stripes onto your dining room walls, and maybe the easy way is not for you. But if you are the person whose only tool at home is the key that opens the front door, you will love this!

Let’s review model assessment graphs.

**Lift Charts**

A lift chart indicates how well the model did compared to no model. The lift is the ratio between the result predicted by the model and the result using no model. A lift chart plots lift on the vertical axis and depth (0 to 100%) on the horizontal axis. A lift of 3 indicates that your model predicted 3 times better than just selecting items at random.
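The lift ratio is simple arithmetic; here is a minimal sketch in Python (the taxpayer counts are invented for illustration):

```python
# Toy illustration of lift: 1,000 taxpayers, 100 cheaters,
# and a budget to audit only 100 of them.
total = 1000
cheaters = 100
audited = 100

# Baseline: random selection finds cheaters at the population rate.
baseline_rate = cheaters / total              # 0.10
expected_random = audited * baseline_rate     # about 10 cheaters

# A model with a lift of 3 at this depth finds 3x the baseline rate.
lift = 3
expected_model = audited * baseline_rate * lift   # about 30 cheaters

print(expected_random, expected_model)
```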

Let’s say we have the resources to audit 100 taxpayers, and we want to focus on those who cheat on their taxes. There are 1,000 taxpayers total and 100 are cheating. If we select 100 at random to audit, we will only find about 10 cheaters. But what if we are able to build a model with a lift of 3? Now we can still audit only 100, but this time we will find about 30 of the cheaters!

**Cumulative Lift Charts**

A cumulative lift chart is just another way of looking at lift and is also called a gains chart. Now instead of plotting lift on the vertical axis, we plot cumulative lift as a percent. In our tax cheating example, we know that if we audit 100% of taxpayers we will find 100% of the cheaters.
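As a rough sketch of how the points of a gains curve are computed (the `cumulative_gains` helper and the toy scores are invented for illustration, not the SAS implementation):

```python
# Sketch: cumulative gains ("cumulative lift") from model scores.
# Sort observations by predicted probability, then at each depth record
# the fraction of all true events captured so far.
def cumulative_gains(scores, labels):
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    total_events = sum(labels)
    captured, gains = 0, []
    for i in order:
        captured += labels[i]
        gains.append(captured / total_events)
    return gains  # gains[k] = fraction of events found in top k+1 observations

# Tiny example: 10 taxpayers, 3 cheaters; the model ranks cheaters high.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
labels = [1,   1,   0,   1,   0,   0,   0,   0,   0,   0]
print(cumulative_gains(scores, labels))
```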

But this is very expensive. We see from the cumulative lift chart below that if we audit 10% of taxpayers at random (orange baseline line), we will likely get 10% of the cheaters. But with our model that has a lift of 3 at a depth of 10%, we see that if we audit 10% of the taxpayers, we will get 30% of the cheaters!

**ROC Curves**

An ROC (receiver operating characteristic) curve is another way to visually compare your model results. An ROC curve plots the true positive rate against the false positive rate. The true positive rate is also called sensitivity, and the false positive rate is equal to 1 minus specificity.

The best model will be the one whose curve shows up highest and farthest to the left on the graph, i.e., maximizing the true positive rate and minimizing the false positive rate. Above we see that the gradient boosting model performed best (green line), followed by the random forest model (red line).

The logistic model (blue line) performed the worst of the three models. Below is a picture of the ROC curve we created using SAS Studio on the old Viya 3.0. We used PROC SGPLOT and PROC SGPANEL (see how).

We can also create ROC curves using Python with SWAT as shown in the screen capture below. Notice that when we do our own programming we have complete control. For example, we can spell “False Positive Rate” any way that we want to. So there. Nanny nanny boo boo.
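If you just want the numbers behind such a curve, the ROC points can be computed in plain Python with no SWAT or plotting package at all; this is a minimal sketch with an invented helper name and toy data, not the SAS or SWAT implementation:

```python
# Hand-rolled ROC points: sweep a threshold over the scores and record
# (false positive rate, true positive rate) at each cutoff.
def roc_points(scores, labels):
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts

scores = [0.9, 0.7, 0.6, 0.4, 0.2]
labels = [1,   1,   0,   1,   0]
print(roc_points(scores, labels))
```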

**Misclassification Rate Graphs**

A misclassification rate graph is simply a bar chart showing your event (in this case cheating, or "1" in this screenshot) as one bar and non-event (not cheating, or "0" in this screenshot) as another bar. Each bar is a stacked bar where the color indicates how many observations were categorized correctly and how many incorrectly by the model.
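The counts behind such a stacked bar chart can be sketched in a few lines of Python (toy data, invented for illustration):

```python
# For each observed class, tally correctly vs incorrectly predicted cases,
# then compute the overall misclassification rate.
from collections import Counter

observed  = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 0, 1, 0, 0, 1, 0, 1]

counts = Counter((obs, obs == pred) for obs, pred in zip(observed, predicted))
for cls in (0, 1):
    print(f"class {cls}: correct={counts[(cls, True)]}, "
          f"incorrect={counts[(cls, False)]}")

rate = sum(o != p for o, p in zip(observed, predicted)) / len(observed)
print("misclassification rate:", rate)
```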

**Predicted vs Observed**

We can also simply look at the predicted (modeled) results versus the observed results.

**The Easy Way**

Now you can get these graphs with a click of a button! In VS/VDMML 8.1 on Viya 3.2 we can look at any appropriate assessment graphs instantly using the visual interface. For models using a categorical response variable, you can get a graph of lift (the default), cumulative lift, ROC curve or misclassification rate.

It is easy to switch the assessment graph by right-clicking it.

See how this works in the animated GIFs below for a Random Forest and a Logistic Regression.

**Random Forest Model**

**Logistic Regression Model**

**Comparing Models**

We can compare the different models using the **Model Comparison** object included as part of Visual Statistics.

In order for models to be compared, they must have the same data source, training and validation partition variable, response variable, event level, and group-by variable.

The default graph for the **Model Comparison** object is a lift chart (as shown below), but you can change this to cumulative lift, ROC, or misclassification rate. The default fit statistic used to select the best model is the misclassification rate, but you can also change this in the **Options** pane.

Below I have changed the assessment graph to an ROC Curve and the Fit Statistic to the Kolmogorov-Smirnov statistic.
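For context on that choice, the Kolmogorov-Smirnov statistic for a binary classifier is commonly computed as the maximum separation between the cumulative event and non-event distributions, i.e. the maximum of (true positive rate minus false positive rate) over all score thresholds. A minimal sketch (helper name and toy data invented, and not the exact SAS computation):

```python
# KS statistic: max over thresholds of (TPR - FPR).
def ks_statistic(scores, labels):
    pos = sum(labels)
    neg = len(labels) - pos
    best = 0.0
    for t in set(scores):
        tpr = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1) / pos
        fpr = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0) / neg
        best = max(best, tpr - fpr)
    return best

scores = [0.9, 0.7, 0.6, 0.4, 0.2]
labels = [1,   1,   0,   1,   0]
print(ks_statistic(scores, labels))
```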

**Additional Model Comparison Aids**

In addition to graphs, there are many ways to compare like models, such as fit statistics.

To see details you can hover over the right top of your canvas to open the overflow menu and select the Explore icon as shown below. This "explore mode" displays a table below your graphs where you have information such as fit statistics for every model evaluated. The table can be easily exported if you would like.

If you have built two different regression models, you can compare them using model fit statistics such as the R-square, Akaike Information Criterion (AIC), the corrected AIC (AICC), F Value, Mean Square Error, Root Mean Square Error (RMSE), or Schwarz Bayesian Criterion. Again, these are readily available at a single click in SAS VS/VDMML on Viya 3.2.
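For reference, here is a sketch of two of those statistics in their common least-squares forms; exact conventions vary by software, so treat this as illustrative rather than the SAS computation (the function and the input numbers are invented):

```python
# Common least-squares forms:
#   AIC  = n * ln(SSE / n) + 2k
#   AICC = AIC + 2k(k + 1) / (n - k - 1)
# where n is the number of observations, SSE the sum of squared errors,
# and k the number of estimated parameters.
import math

def aic_aicc(sse, n, k):
    aic = n * math.log(sse / n) + 2 * k
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)
    return aic, aicc

aic, aicc = aic_aicc(sse=12.5, n=50, k=3)
print(round(aic, 3), round(aicc, 3))
```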

For Generalized Linear Models, you can compare them using the Validation ASE (displayed by default). Again, with the click of a button, you can see instead the -2 Log Likelihood, Akaike Information Criterion (AIC), corrected AIC (AICC), Average Square Error (ASE) or Bayesian Information Criterion.

**Note:** There is sometimes confusion about the difference between the mean squared error (MSE) and ASE (average squared error). The MSE is the sum of squared errors (SSE) divided by the degrees of freedom for error (DFE) and is commonly used with linear regression. MSE is an unbiased estimate of the population noise variance under the usual assumptions.

For neural networks, however, there is no known unbiased estimator. Also, the degrees of freedom is often negative for neural networks.

The effective degrees of freedom can be approximated, but that may be prohibitively expensive and may rely on assumptions that are not met. Instead, the ASE is commonly used for neural networks. The ASE is the sum of squared errors (SSE) divided by the number of observations N, rather than by the degrees of freedom.
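A tiny worked contrast of the two definitions (toy data, with an assumed two-parameter model):

```python
# Both statistics divide the sum of squared errors (SSE), but MSE uses the
# error degrees of freedom (DFE = n - p for a linear model with p
# parameters) while ASE divides by the number of observations n.
observed  = [3.0, 5.0, 7.0, 9.0]
predicted = [2.5, 5.5, 6.5, 9.5]

n = len(observed)
p = 2  # e.g. intercept + one slope
sse = sum((o - y) ** 2 for o, y in zip(observed, predicted))

mse = sse / (n - p)   # SSE / DFE
ase = sse / n         # SSE / N
print(mse, ase)
```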

For more information on fit statistics and anything else that you need to know, the **Help** section in VA/VS/VDMML 8.1 on Viya 3.2 is very, well, helpful. Simply click on the question mark circle in the top right of your Visual Analytics screen (shown below). Then use the magnifying glass search icon to find information about whatever you have questions on.

Which assessment graphs are available depends on the model and on whether your response variable (target variable) is categorical or continuous. The following table shows which assessment graphs are available in the VS/VDMML 8.1 on Viya 3.2 visual interface for which models.

I used to be all about DIY (programming, in my analogy here), but with Viya 3.2 I am a convert! No more DIY for standard assessment graphs! Rather than building them with the SAS Studio interface, I will let them be automatically generated with a click or two. I know that I still have the programming ability in my back pocket if I need to create something customized or out of the ordinary, or if I would like to misspell my axis labels.

*The old DIY Beth wanted to custom code everything herself. Some might say "control freak."*

*The new turnkey Beth loves the VA/VS/VDMML 8.1 Visual Interface on Viya 3.2!*