Support Vector Machine Models: Supervised Learning... (SAS Communities Library)
In a previous post, I summarized the tree-related models. In this post, I'll explore support vector and factorization machine models.

**Support Vector Machine Models** (PROC SVMACHINE)

Support vector machines (SVMs, or support vector networks) are supervised learning models that perform binary linear classification; by applying the "kernel trick," which implicitly maps the inputs into high-dimensional feature spaces, they can also perform nonlinear classification. Support vector clustering (SVC) is the analogous unsupervised learning model.

An SVM model maps observations as points in space so that the examples in separate categories are divided by a hyperplane (or a set of hyperplanes in a high-dimensional space) with the largest possible distance to the nearest training points (i.e., the functional margin, or widest gap). New observations are mapped into that same space and predicted to belong to a category depending on which side of the gap they fall on.

- SAS PROC SVMACHINE uses linear or nonlinear kernels to classify observations. To obtain the global solution for SVM optimization problems, PROC SVMACHINE uses a primal-dual interior-point method with linear and polynomial kernels of degree 2 or 3.

The SVMACHINE procedure supports the AUTOTUNE statement, allowing you to automatically find the best values for the penalty and polynomial degree hyperparameters.
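As a minimal sketch of how this looks in practice, the step below fits a degree-2 polynomial SVM and then tunes the penalty and degree automatically. The table name `mycas.train`, the inputs `x1`–`x5` and `region`, and the target `y` are hypothetical placeholders, not from the original post.

```sas
/* Hypothetical CAS table mycas.train with interval inputs x1-x5,
   a nominal input region, and a binary target y */
proc svmachine data=mycas.train c=1.0;        /* c= sets the penalty */
   input x1-x5 / level=interval;
   input region / level=nominal;
   target y;
   kernel polynom / deg=2;                    /* degree-2 polynomial kernel */
run;

/* Alternatively, let AUTOTUNE search for the best penalty
   and polynomial degree instead of fixing them by hand */
proc svmachine data=mycas.train;
   input x1-x5 / level=interval;
   input region / level=nominal;
   target y;
   autotune;
run;
```

Consult the PROC SVMACHINE documentation for the exact AUTOTUNE options available in your SAS Viya release.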

**Factorization Machine Models** (PROC FACTMAC)

Factorization machines (FMs) are general predictors that can be thought of as SVMs with a polynomial kernel. In contrast to SVMs, FMs model all interactions between variables using factorized parameters. This makes FMs useful because they can estimate interactions even under severe sparsity, i.e., low numbers of transactions and little feedback data, so factorization machines work well for applications like recommender systems. The FM model equation can be optimized directly, and the model parameters can be estimated without the need for support vectors in the solution. FMs can enforce a nonnegativity constraint in cases where the factors should logically be positive, and tensor factorization can be applied to incorporate multiple attributes (beyond just two).

The FACTMAC procedure supports the AUTOTUNE statement, allowing you to automatically find the best values for the number of factors, learning step, and maximum iterations hyperparameters.
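A hedged sketch of a recommender-style FM fit follows. The table `mycas.ratings` and the columns `userid`, `itemid`, and `rating` are assumed placeholders; the hyperparameter values shown are illustrative, not tuned.

```sas
/* Hypothetical CAS table mycas.ratings: one row per (user, item) rating */
proc factmac data=mycas.ratings
             nfactors=20         /* number of latent factors per feature */
             learnstep=0.001     /* learning step for SGD */
             maxiter=100
             seed=12345;
   input userid itemid / level=nominal;
   target rating / level=interval;
   output out=mycas.scored copyvars=(rating);  /* predicted vs. actual ratings */
run;
```

Replacing the fixed NFACTORS=, LEARNSTEP=, and MAXITER= values with an AUTOTUNE statement lets the procedure search for those hyperparameters instead.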