JBerry
Quartz | Level 8

In E-Miner, I see 2 selections in the Model Comparison node:

- Average Squared Error
- Mean Squared Error

What is the difference?

Searching the HELP does not yield any decent description:

----->8  snip from HELP file ------------------------------
The Selection Statistic choices are as follows:

- Default — The default selection uses different statistics based on the type of target variable and whether a profit/loss matrix has been defined.
  - If a profit/loss matrix is defined for a categorical target, the average profit or average loss is used.
  - If no profit/loss matrix is defined for a categorical target, the misclassification rate is used.
  - If the target variable is interval, the average squared error is used.
- Akaike's Information Criterion — chooses the model with the smallest Akaike's Information Criterion value.
- Average Squared Error — chooses the model with the smallest average squared error value.
- Mean Squared Error — chooses the model with the smallest mean squared error value.
----->8  snip from HELP file ------------------------------
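
The "Default" rule quoted above is just a small branching decision. As a rough sketch only (not Enterprise Miner source code; the flag names are made up), it amounts to something like this DATA step logic:

/* Hypothetical sketch of the quoted "Default" selection rule.      */
/* The flag variables are made-up names, not Enterprise Miner code. */
data _null_;
   length stat $30;
   target_is_categorical  = 1;   /* 1 = categorical target, 0 = interval target */
   has_profit_loss_matrix = 0;   /* 1 = a profit/loss matrix has been defined   */

   if target_is_categorical then do;
      if has_profit_loss_matrix then stat = "average profit or loss";
      else stat = "misclassification rate";
   end;
   else stat = "average squared error";   /* interval target */

   put "Default selection statistic: " stat;
run;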

1 ACCEPTED SOLUTION

JBerry
Quartz | Level 8
I answered my own question. In a different part of the HELP section is this description:

In linear models, statisticians routinely use the mean squared error (MSE) as the main measure of fit. The MSE is the sum of squared errors (SSE) divided by the degrees of freedom for error. (DFE is the number of cases less the number of weights in the model.) This process yields an unbiased estimate of the population noise variance under the usual assumptions.
For neural networks and decision trees, there is no known unbiased estimator. Furthermore, the DFE is often negative for neural networks. There exist approximations for the effective degrees of freedom, but these are often prohibitively expensive and are based on assumptions that might not hold. Hence, the MSE is not nearly as useful for neural networks as it is for linear models. One common solution is to divide the SSE by the number of cases N, not the DFE. This quantity, SSE/N, is referred to as the average squared error (ASE).
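
To make the difference concrete, here is a quick numeric check (the SSE, case count, and weight count below are invented for illustration, not E-Miner output):

/* Made-up numbers: SSE = 40 over 10 cases, model with 3 weights. */
data _null_;
   sse = 40;          /* sum of squared errors over all cases        */
   n   = 10;          /* number of cases                             */
   w   = 3;           /* number of weights (parameters) in the model */
   dfe = n - w;       /* degrees of freedom for error = 7            */
   mse = sse / dfe;   /* mean squared error    = 40/7  ~ 5.71        */
   ase = sse / n;     /* average squared error = 40/10 = 4.0         */
   put mse= ase=;
run;

With the same SSE, the ASE comes out smaller than the MSE because it divides by all N cases rather than by N minus the number of weights.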
