We investigate the problem of predicting variables on an ordinal scale. This task, referred to as ordinal regression, is complementary to the standard machine learning tasks of classification and metric regression. In contrast to statistical models, we present a distribution-independent formulation of the problem together with uniform bounds on the risk functional. The presented approach is based on a mapping from objects to scalar utility values. Similar to Support Vector methods, we derive a new learning algorithm for ordinal regression based on large margin rank boundaries. We give experimental results for an information retrieval task: learning the order of documents with respect to an initial query. The experimental results indicate that, for more than two ranks, the presented algorithm outperforms more naive approaches to ordinal regression such as Support Vector classification and Support Vector regression.
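The core idea can be illustrated with a small sketch: learn a linear utility u(x) = w·x such that every pair of examples with different ranks is ordered by a margin of at least one, i.e. w·(x_i − x_j) ≥ 1 whenever x_i outranks x_j. The sketch below uses synthetic data and plain hinge-loss gradient descent in place of the paper's quadratic program; all names and data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of large-margin pairwise ranking (not the paper's
# exact QP formulation): fit a linear utility so that higher-ranked
# items receive higher utility values.

rng = np.random.default_rng(0)

# Synthetic data: the true utility is x[0] + 2*x[1]; three ordinal
# ranks are obtained by cutting that utility at its tertiles.
X = rng.normal(size=(200, 2))
u_true = X @ np.array([1.0, 2.0])
ranks = np.digitize(u_true, np.quantile(u_true, [0.33, 0.66]))

# Difference vectors for all pairs with strictly different ranks;
# each row d = x_i - x_j (rank_i > rank_j) should satisfy w.d >= 1.
idx_i, idx_j = np.where(ranks[:, None] > ranks[None, :])
D = X[idx_i] - X[idx_j]

# Minimise the hinge loss sum(max(0, 1 - w.d)) by gradient descent.
w = np.zeros(2)
lr = 0.01
for _ in range(200):
    violated = (D @ w) < 1.0          # pairs inside the margin
    if violated.any():
        w += lr * D[violated].mean(axis=0)

# Fraction of pairs correctly ordered by the learned utility.
acc = float(np.mean(D @ w > 0))
print(acc)
```

Rank boundaries for new examples could then be placed as thresholds on the learned utility scale, which is how a scalar utility yields an ordinal prediction.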
Herbrich, R., Graepel, T., & Obermayer, K. (1999). Support Vector Learning for Ordinal Regression. In Proceedings of the Ninth International Conference on Artificial Neural Networks (pp. 97–102). Edinburgh. Retrieved from http://www.herbrich.me/papers/icann99_ordinal.pdf