mHunfalvay
Calcite | Level 5

Hello, in the SAS documentation on the SVM algorithms, some explanation is missing. Can you please help me understand all the inputs to these algorithms, specifically:

  • Polynomial — K(u, v) = (u^T v + 1)^p with polynomial order p. The 1 is added in order to avoid zero-value entries in the Hessian matrix for large values of p.
    What is K?
    What is u,v on both sides of the equation?
    What is T?
     
     
    Thanks
Accepted Solution
HarrySnart
SAS Employee

Hi @mHunfalvay You've not put the T or p as superscripts, but I assume you mean the below:

[Image: the polynomial kernel K(u, v) = (u^T v + 1)^p]

SVMs seek a maximum separating hyperplane between classification groups. Since data is rarely linearly separable, SVMs project the data into a higher-dimensional space in which it becomes linearly separable. SVMs use the kernel trick to work in that higher-dimensional space without having to calculate the projections explicitly.

 

K refers to the kernel function.

 

u and v are vectors in the input space.

 

T refers to the transpose of vector u (so u^T v is the inner, or dot, product of u and v).

 

This is only relevant to the calculation of the kernel trick, something the algorithm does as it calculates the inner product of vectors u and v.
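To make the notation concrete, here is a minimal sketch of the polynomial kernel itself (the function name and example vectors are illustrative, not from the SAS implementation):

```python
import numpy as np

def polynomial_kernel(u, v, p=3):
    """Polynomial kernel K(u, v) = (u^T v + 1)^p.

    u^T v is the ordinary inner (dot) product of the input vectors;
    adding 1 and raising to the power p implicitly maps the data into
    a higher-dimensional feature space (the kernel trick), without
    ever computing that mapping explicitly.
    """
    return (np.dot(u, v) + 1) ** p

u = np.array([1.0, 2.0])
v = np.array([3.0, 0.5])
print(polynomial_kernel(u, v, p=2))  # (1*3 + 2*0.5 + 1)^2 = 25.0
```

Note the +1 offset: for large p, it keeps kernel values (and hence entries of the Hessian built from them) away from zero even when u^T v is small, which is the point made in the documentation you quoted.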

 

The key consideration is the tuning of the hyperparameters that control the fit of the SVM, such as the penalty, the tolerance, and the polynomial degree of the kernel function.

 

For further information there are references included in the documentation: https://documentation.sas.com/doc/en/pgmsascdc/v_022/casactml/casactml_svm_references.htm?homeOnFail 

 

I hope this helps

Harry


