Support Vector Machines and Regularization

V. Cherkassky and Y. Ma (USA)

Keywords

Function approximation, regularization, structural risk minimization

Abstract

Recently, there has been growing interest in Statistical Learning Theory, also known as VC theory, due to the many successful applications of Support Vector Machines (SVMs). Even though most theoretical results in VC theory (including all of the main concepts underlying SVM methodology) were developed over 25 years ago, these concepts are occasionally misunderstood in the research community. This paper compares standard SVM regression and regularization for learning dependencies from data. We point out that the SVM approach was developed in VC theory under the risk minimization setting, whereas the regularization approach was developed under the function approximation setting. This distinction is especially important because regularization-based learning is often presented as a purely constructive methodology (with no clearly stated problem setting), even though the original regularization theory was introduced under a clearly stated function approximation setting. Further, we present empirical comparisons illustrating the effect of different mechanisms for complexity control (i.e., ε-insensitive loss vs. standard ridge regression) on generalization performance, under very simple settings using synthetic data sets. These comparisons suggest that the SVM approach to complexity control (via the ε-insensitive loss) is more appropriate for learning in sparse, high-dimensional settings.
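
The following sketch is not the authors' experimental code; it is a minimal illustration, using scikit-learn's SVR and Ridge estimators, of the kind of comparison the abstract describes: ε-insensitive SVM regression versus standard ridge regression on a synthetic sparse, high-dimensional data set. The sample sizes, noise level, and hyperparameter values are illustrative assumptions, not values reported in the paper.

    # Illustrative sketch (assumed setup, not the paper's experiments):
    # epsilon-insensitive SVM regression vs. ridge regression on synthetic
    # sparse, high-dimensional data with few training samples.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    n_train, n_test, n_features = 30, 200, 50   # few samples, many inputs (assumed)
    w_true = np.zeros(n_features)
    w_true[:3] = [2.0, -1.0, 0.5]               # sparse target: only 3 informative inputs

    def make_data(n):
        X = rng.normal(size=(n, n_features))
        y = X @ w_true + rng.normal(scale=0.2, size=n)
        return X, y

    X_train, y_train = make_data(n_train)
    X_test, y_test = make_data(n_test)

    # Complexity control via the epsilon-insensitive loss (SVM regression)
    svr = SVR(kernel="linear", C=1.0, epsilon=0.2).fit(X_train, y_train)
    # Complexity control via a quadratic penalty on the weights (ridge regression)
    ridge = Ridge(alpha=1.0).fit(X_train, y_train)

    print("SVR test MSE:  ", mean_squared_error(y_test, svr.predict(X_test)))
    print("Ridge test MSE:", mean_squared_error(y_test, ridge.predict(X_test)))

In practice, both methods would be tuned (e.g., C, epsilon, and alpha selected by cross-validation) before any comparison of generalization performance is meaningful.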
