Bayesian Preference Learning for Multiclass Problems

T. Nakano, K. Emoto, and N. Inuzuka (Japan)


machine learning, Bayes' theorem, maximum a posteriori probability, multiclass problem


Several methods reduce a multiclass classification problem to a set of two-class classification problems. Such methods do not learn a direct classifier X → Y (where X is the input space and Y is the set of classes); instead they learn an indirect relation such as X × Y² → {0, 1}. This technique is sometimes preferred because of its advantages: an expected improvement in accuracy and the availability of well-studied two-class learners. In this paper, we propose a reduction method based on estimating the error probabilities of the learned two-class classifiers. We derive a combination equation from Bayes' theorem. We performed experiments on 18 UCI datasets. On seven of them the proposed classifier was superior to C4.5, and it was inferior on none.
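The paper's specific Bayesian error-probability combination cannot be reconstructed from the abstract alone, but the general reduction it builds on (learning a pairwise relation X × Y² → {0, 1} and combining the binary decisions) can be sketched. The following is a minimal, self-contained illustration of a one-vs-one reduction with majority voting; the class names `CentroidPair` and `OneVsOne` and the trivial nearest-centroid binary learner are illustrative assumptions, not the authors' method.

```python
from itertools import combinations

class CentroidPair:
    """Toy two-class learner for one class pair (an assumption, not the
    paper's learner): predicts whichever class centroid is nearer."""
    def fit(self, xs, ys):
        self.classes = sorted(set(ys))  # exactly the two labels in this pair
        self.centroids = {
            c: sum(x for x, y in zip(xs, ys) if y == c) / ys.count(c)
            for c in self.classes
        }
        return self

    def predict(self, x):
        return min(self.classes, key=lambda c: abs(x - self.centroids[c]))

class OneVsOne:
    """Reduce a multiclass problem to two-class subproblems, one per
    unordered class pair, and combine their outputs by majority vote."""
    def fit(self, xs, ys):
        self.classes = sorted(set(ys))
        self.pairs = {}
        for a, b in combinations(self.classes, 2):
            sub = [(x, y) for x, y in zip(xs, ys) if y in (a, b)]
            sx, sy = zip(*sub)
            self.pairs[(a, b)] = CentroidPair().fit(list(sx), list(sy))
        return self

    def predict(self, x):
        votes = {c: 0 for c in self.classes}
        for clf in self.pairs.values():
            votes[clf.predict(x)] += 1
        return max(self.classes, key=lambda c: votes[c])

# Toy 1-D data: three classes clustered around 0, 5, and 10.
xs = [0.1, -0.2, 0.3, 4.9, 5.2, 5.1, 9.8, 10.1, 10.2]
ys = ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c']
model = OneVsOne().fit(xs, ys)
print(model.predict(5.0))  # → 'b'
```

The paper's contribution replaces the flat majority vote above with a weighting derived from Bayes' theorem, using estimated error probabilities of each pairwise classifier.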
