Convex batch mode active sampling via α-relative Pearson divergence


Abstract

Active learning is a machine learning technique that selects a subset of an unlabeled dataset for labeling and trains a classifier on the labeled data. Recently, batch mode active learning, which selects a batch of samples to label in parallel, has attracted considerable attention. Its challenge lies in the choice of criteria used for guiding the search for the optimal batch. In this paper, we propose a novel approach to selecting the optimal batch of queries by minimizing the α-relative Pearson divergence (RPE) between the labeled and the original datasets. This particular divergence is chosen since it can distinguish the optimal batch more easily than other measures, especially when the available candidates are similar. The proposed objective is a min-max optimization problem, which is difficult to solve due to the involvement of both minimization and maximization. We find that the objective has an equivalent convex form, and thus a globally optimal solution can be obtained. The subgradient method can then be applied to solve the simplified convex problem. Our empirical studies on UCI datasets demonstrate the effectiveness of the proposed approach compared with state-of-the-art batch mode active learning methods.
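The abstract does not reproduce the RPE objective itself, but the solution strategy it describes (subgradient descent on a convex, non-smooth reformulation of a min-max problem) can be illustrated generically. The sketch below minimizes a toy convex surrogate f(x) = max_i ⟨a_i, x⟩ + ½‖x‖², where the max over pieces stands in for the inner maximization; the matrix `A` and the step-size schedule are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical "pieces" of the max term; stands in for the inner
# maximization of a min-max objective after convex reformulation.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

def f(x):
    # Convex, non-smooth: pointwise max of linear functions plus a
    # strongly convex quadratic regularizer.
    return np.max(A @ x) + 0.5 * (x @ x)

def subgradient(x):
    # A valid subgradient of f at x: gradient of the active linear
    # piece (the one attaining the max) plus the gradient of the
    # quadratic term.
    i = int(np.argmax(A @ x))
    return A[i] + x

x = np.zeros(3)
best = f(x)
for t in range(1, 501):
    # Diminishing step size 1/t, the classical choice guaranteeing
    # convergence of the best iterate for subgradient methods.
    x = x - (1.0 / t) * subgradient(x)
    best = min(best, f(x))
```

Because f is non-differentiable at points where the maximizing piece switches, ordinary gradient descent does not apply; the subgradient method only requires a valid subgradient at each iterate and tracks the best objective value seen so far.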

Citation (APA)
Wang, H., Du, L., Zhou, P., Shi, L., & Shen, Y. D. (2015). Convex batch mode active sampling via α-relative Pearson divergence. In Proceedings of the National Conference on Artificial Intelligence (Vol. 4, pp. 3045–3051). AI Access Foundation. https://doi.org/10.1609/aaai.v29i1.9618
