Maximum-margin framework for training data synchronization in large-scale hierarchical classification


Abstract

In the context of supervised learning, the training data for large-scale hierarchical classification consist of (i) a set of input-output pairs and (ii) a hierarchy structure defining parent-child relations among class labels. It is often the case that the hierarchy structure given a priori is not optimal for achieving high classification accuracy. This is especially true for web taxonomies such as the Yahoo! directory, which consist of tens of thousands of classes. Furthermore, an important goal of hierarchy design is to render better navigability and browsing. In this work, we propose a maximum-margin framework for automatically adapting the given hierarchy by using the set of input-output pairs to yield a new hierarchy. The proposed method is not only theoretically justified but also provides a more principled approach to the hierarchy flattening techniques proposed earlier, which are ad hoc and empirical in nature. Empirical results on publicly available large-scale datasets demonstrate that classification with the new hierarchy leads to generalization performance that is better than or comparable to that of the hierarchy flattening techniques. © Springer-Verlag 2013.
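To make the hierarchy flattening idea concrete, the following is a minimal, hypothetical sketch: internal nodes whose node-level classifier attains only a small margin are removed and their children reattached to the grandparent. The margin scores here are stand-in values; the paper derives such quantities from a maximum-margin objective, which this illustration does not implement.

```python
# Hypothetical sketch of hierarchy flattening (not the paper's algorithm):
# drop low-margin internal nodes and promote their children upward.

def flatten(parent, margin, threshold):
    """parent: {node: parent_node}; the root's parent is None.
    margin: {node: estimated margin of that node's classifier}.
    Returns a new parent map with low-margin internal nodes removed."""
    # Build a children index to identify internal (non-leaf) nodes.
    children = {}
    for node, p in parent.items():
        children.setdefault(p, []).append(node)
    # An internal, non-root node is removable if its margin is too small.
    removable = {n for n in parent
                 if n in children
                 and parent[n] is not None
                 and margin.get(n, float("inf")) < threshold}
    new_parent = {}
    for node, p in parent.items():
        if node in removable:
            continue
        # Climb past any chain of removed ancestors.
        while p in removable:
            p = parent[p]
        new_parent[node] = p
    return new_parent


# Toy taxonomy: root A, internal nodes B and C, leaves c1-c3.
parent = {"A": None, "B": "A", "C": "A", "c1": "B", "c2": "B", "c3": "C"}
margin = {"B": 0.1, "C": 2.0}
flattened = flatten(parent, margin, threshold=0.5)
# B is removed; its leaves c1 and c2 become direct children of A.
```

Removing node B here shortens the prediction path for c1 and c2, which is the intuition behind flattening: fewer error-prone intermediate decisions along the root-to-leaf path.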

CITATION STYLE

APA

Babbar, R., Partalas, I., Gaussier, E., & Amini, M. R. (2013). Maximum-margin framework for training data synchronization in large-scale hierarchical classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8226 LNCS, pp. 336–343). https://doi.org/10.1007/978-3-642-42054-2_42
