Hi @mHunfalvay You haven't put the T or P in superscript, but I assume you mean the following:
SVMs seek a maximum-margin separating hyperplane between classification groups. Since real-world data is rarely linearly separable, SVMs project the data into a higher-dimensional space in which it becomes linearly separable. The kernel trick lets the SVM work in that higher-dimensional space without ever computing the projections explicitly.
K refers to the kernel function.
u and v are vectors in the input space.
The superscript T denotes the transpose, so u^T v is the inner (dot) product of u and v.
This only matters inside the kernel calculation itself, which the algorithm performs as it evaluates the inner product of u and v; you never need to compute it by hand.
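To make the kernel trick concrete, here is a minimal sketch (my own illustration, not SAS code) using a degree-2 polynomial kernel. The kernel K(u, v) = (u^T v + 1)^2 gives the same value as explicitly projecting u and v into the higher-dimensional feature space and taking the inner product there, but without ever forming the projection:

```python
import numpy as np

# Degree-2 polynomial kernel: K(u, v) = (u^T v + 1)^2
def poly_kernel(u, v):
    return (np.dot(u, v) + 1.0) ** 2

# Explicit feature map for the same kernel on 2-D inputs:
# phi(x) = [1, sqrt(2)*x1, sqrt(2)*x2, x1^2, x2^2, sqrt(2)*x1*x2]
def phi(x):
    x1, x2 = x
    return np.array([
        1.0,
        np.sqrt(2) * x1,
        np.sqrt(2) * x2,
        x1 ** 2,
        x2 ** 2,
        np.sqrt(2) * x1 * x2,
    ])

u = np.array([1.0, 2.0])
v = np.array([3.0, 0.5])

# Both routes give the same value; the kernel never builds phi explicitly.
print(poly_kernel(u, v))        # (1*3 + 2*0.5 + 1)^2 = 25.0
print(np.dot(phi(u), phi(v)))   # same result via the explicit projection
```

The explicit map phi has 6 dimensions for 2-D input and grows rapidly with input dimension and polynomial degree, which is exactly why the kernel shortcut matters.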
The key consideration for you is tuning the hyperparameters that control the fit of the SVM, such as the penalty, the tolerance, and the degree of a polynomial kernel.
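As an illustration of the same hyperparameters outside SAS (a sketch using scikit-learn, whose SVC exposes penalty C, tolerance tol, and polynomial degree; parameter values here are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy data purely for demonstration
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# penalty (C), tolerance (tol), and polynomial degree (degree)
# are the knobs that control the fit of the SVM
clf = SVC(kernel="poly", degree=3, C=1.0, tol=1e-3)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy
```

The analogous options exist in the SAS SVM action; see the documentation link below for the exact parameter names there.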
For further information there are references included in the documentation: https://documentation.sas.com/doc/en/pgmsascdc/v_022/casactml/casactml_svm_references.htm?homeOnFail
I hope this helps
Harry