Does one-against-all or one-against-one improve the performance of multiclass classifications?


Abstract

One-against-all and one-against-one are two popular methodologies for reducing multiclass classification problems to a set of binary classifications. In this paper, we are interested in how both one-against-all and one-against-one affect the performance of classification algorithms such as decision tree, naïve Bayes, support vector machine, and logistic regression. Since both one-against-all and one-against-one effectively create a committee of classifiers, they are expected to improve the performance of classification algorithms. However, our experimental results surprisingly show that one-against-all worsens the performance of the algorithms on most datasets. One-against-one helps, but performs worse than bagging these algorithms for the same number of iterations. Thus, we conclude that neither one-against-all nor one-against-one should be used with algorithms that can perform multiclass classification directly; bagging is a better approach for improving their performance. Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
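For readers unfamiliar with the three strategies compared in the abstract, the following is a minimal sketch, not the authors' experimental setup, of how one-against-all, one-against-one, and bagging can be wrapped around a base learner. It assumes scikit-learn, a decision tree as the base classifier, and the built-in iris dataset purely for illustration; the paper's own datasets and evaluation protocol are not reproduced here.

```python
# Illustrative comparison of multiclass strategies (sketch only, not the paper's code).
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.ensemble import BaggingClassifier

X, y = load_iris(return_X_y=True)          # small multiclass dataset, assumed for illustration
base = DecisionTreeClassifier(random_state=0)

strategies = {
    "direct multiclass": base,                                   # tree handles all classes at once
    "one-against-all":   OneVsRestClassifier(base),              # one binary classifier per class
    "one-against-one":   OneVsOneClassifier(base),               # one binary classifier per class pair
    "bagging":           BaggingClassifier(base, n_estimators=10, random_state=0),  # resampled committee
}

for name, clf in strategies.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

Note the structural difference the paper's comparison rests on: one-against-all and one-against-one build their committee by decomposing the class structure, whereas bagging builds it by resampling the training data while leaving the multiclass problem intact.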

Cite

APA

Eichelberger, R. K., & Sheng, V. S. (2013). Does one-against-all or one-against-one improve the performance of multiclass classifications? In Proceedings of the 27th AAAI Conference on Artificial Intelligence, AAAI 2013 (pp. 1609–1610). https://doi.org/10.1609/aaai.v27i1.8522
