N. Ancona, R. Maglietta, and E. Stella (Italy)
Support Vector Machines, classification, sparse and dense data representations, matching pursuit, method of frames.
This paper focuses on the problem of how data representation influences the generalization error of kernel-based learning machines, such as Support Vector Machines (SVM), for classification. Frame theory provides a well-founded mathematical framework for representing data in many different ways. We analyze the effects of sparse and dense data representations on the generalization error of such learning machines, measured by the leave-one-out error given a finite number of training data. We show that, in the case of sparse data representation, the generalization capacity of an SVM trained with polynomial or Gaussian kernel functions is equal to that of a linear SVM. We show that sparse data representations reduce the generalization error as long as the representation is not too sparse, as happens with very large dictionaries. Very sparse representations drastically increase the generalization error of kernel-based methods. Dense data representations, on the contrary, reduce the generalization error even in the case of very large dictionaries. We use two different schemes for representing data in overcomplete Haar and Gabor dictionaries, and measure the SVM generalization error on a benchmark data set.
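To illustrate the two representation schemes the abstract contrasts, the sketch below computes a sparse code via matching pursuit and a dense code via the method of frames (minimum-norm coefficients through the dictionary pseudo-inverse). It is a minimal illustration, not the paper's implementation: it assumes a random unit-norm dictionary for brevity, whereas the paper uses overcomplete Haar and Gabor dictionaries, and all names (matching_pursuit, method_of_frames, n_atoms) are hypothetical.

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Sparse code: greedily pick the atom most correlated with the
    current residual, subtract its contribution, and repeat.
    Assumes the columns of D have unit norm."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        correlations = D.T @ residual           # inner products with all atoms
        k = np.argmax(np.abs(correlations))     # best-matching atom
        coeffs[k] += correlations[k]
        residual -= correlations[k] * D[:, k]   # remove its contribution
    return coeffs

def method_of_frames(x, D):
    """Dense code: minimum-norm coefficients obtained from the
    pseudo-inverse of the overcomplete dictionary (canonical dual frame)."""
    return np.linalg.pinv(D) @ x

# Hypothetical example: a 16-dimensional signal in a 64-atom dictionary.
rng = np.random.default_rng(0)
n, m = 16, 64                         # signal dimension, dictionary size
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
x = rng.standard_normal(n)

sparse_code = matching_pursuit(x, D, n_atoms=5)   # few nonzero coefficients
dense_code = method_of_frames(x, D)               # generally all nonzero
```

Either coefficient vector can then be fed to an SVM as the input representation; the paper's comparison concerns how the sparsity of that vector affects the resulting generalization error.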