We investigate the problem of predicting variables on an ordinal scale. This task is referred to as ordinal regression and is complementary to the standard machine learning tasks of classification and metric regression. In contrast to statistical models, we present a distribution-independent formulation of the problem together with uniform bounds on the risk functional. The presented approach is based on a mapping from objects to scalar utility values. Similarly to Support Vector methods, we derive a new learning algorithm for the task of ordinal regression based on large-margin rank boundaries. We give experimental results for an information retrieval task: learning the order of documents with respect to an initial query. The results indicate that the presented algorithm outperforms more naive approaches to ordinal regression, such as Support Vector classification and Support Vector regression, in the case of more than two ranks.
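The core idea of the abstract — learn a scalar utility function so that items with higher ranks receive higher utilities, by enforcing the correct ordering on pairs of differently-ranked items — can be illustrated with a minimal sketch. The synthetic data, the three-rank quantile binning, and the perceptron-style updates below are all assumptions for illustration: the paper itself uses a large-margin (Support Vector) optimization over rank boundaries, for which the perceptron on pairwise differences is only a simple stand-in.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Synthetic ordinal data (assumed for illustration): the rank of each item
# is determined by a latent linear utility u(x) = x @ w_true.
X = rng.normal(size=(60, 3))
w_true = np.array([1.0, -2.0, 0.5])
u = X @ w_true
# Bin utilities into three ordered ranks 0, 1, 2 at the 1/3 and 2/3 quantiles.
y = np.digitize(u, np.quantile(u, [1 / 3, 2 / 3]))

# Pairwise transform: for every pair with different ranks, the difference
# vector x_i - x_j gets label +1 if item i outranks item j, else -1.
diffs, labels = [], []
for i, j in combinations(range(len(y)), 2):
    if y[i] != y[j]:
        diffs.append(X[i] - X[j])
        labels.append(1.0 if y[i] > y[j] else -1.0)
diffs = np.asarray(diffs)
labels = np.asarray(labels)

# Perceptron-style updates on misordered pairs: a crude stand-in for the
# large-margin optimization in the paper, but it learns the same kind of
# utility direction w, with sign(w @ (x_i - x_j)) predicting the order.
w = np.zeros(3)
for _ in range(50):
    for d, s in zip(diffs, labels):
        if s * (d @ w) <= 0.0:
            w += s * d

# Fraction of training pairs ordered correctly by the learned utility.
acc = float((np.sign(diffs @ w) == labels).mean())
```

Working on pairwise differences is what makes this ordinal rather than nominal: the model never assigns meaning to the rank labels themselves, only to their order, and predicted ranks can afterwards be recovered by thresholding the learned utility `X @ w`.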
Herbrich, R., Graepel, T., & Obermayer, K. (1999). Support vector learning for ordinal regression. In IEE Conference Publication (Vol. 1, pp. 97–102). IEE. https://doi.org/10.1049/cp:19991091