Budget Active Learning for Deep Networks

Abstract

In the digital world, unlabeled data is relatively easy to acquire but expensive to label, even with the help of domain experts. At the same time, state-of-the-art Deep Learning methods depend on large labeled datasets for training. Recent work on Deep Learning focuses on uncertainty-based Active Learning (AL) for model training. Although most uncertainty-based AL selection strategies are effective, they fail to account for the informativeness of the unlabeled instances and are prone to querying outliers. To address these challenges, we propose a Budget Active Learning (BAL) algorithm for Deep Networks that advances active learning in three ways. First, we exploit both the uncertainty and the diversity of instances using uncertainty and correlation evaluation metrics. Second, we use a budget annotator to label high-confidence instances while simultaneously updating the selection strategy. Third, we incorporate AL into Deep Networks and perform classification with untrained and pretrained models on two classical datasets and a plant-seedlings dataset while minimizing the prediction loss. Experimental results on the three datasets of varying sizes demonstrate the efficacy of the proposed BAL method over other state-of-the-art Deep AL methods.
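To illustrate the kind of selection round the abstract describes, the sketch below combines an uncertainty score with a correlation-based diversity penalty and lets a budget annotator pseudo-label high-confidence instances. It is a minimal illustration only, not the authors' implementation: the function name bal_select, the confidence threshold, and the use of predictive entropy and cosine similarity as the uncertainty and correlation metrics are assumptions made for the example.

```python
import numpy as np

def bal_select(probs, feats, labeled_feats, k, conf_threshold=0.95):
    """Sketch of one BAL-style selection round (illustrative assumptions only).

    probs: (n, c) softmax outputs of the current model on the unlabeled pool.
    feats: (n, d) embeddings of the unlabeled pool.
    labeled_feats: (m, d) embeddings of already-labeled instances.
    Returns indices to query from the human oracle and (index, pseudo-label)
    pairs handled by the budget annotator.
    """
    # Uncertainty: predictive entropy of each unlabeled instance.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

    # Diversity: penalize instances highly correlated with the labeled set,
    # using cosine similarity to the nearest labeled embedding as a proxy.
    fn = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    ln = labeled_feats / np.linalg.norm(labeled_feats, axis=1, keepdims=True)
    max_corr = (fn @ ln.T).max(axis=1)          # in [-1, 1]
    score = entropy * (1.0 - max_corr)          # uncertain AND uncorrelated

    # Budget annotator: confidently predicted instances are pseudo-labeled
    # instead of being sent to the human oracle.
    confident = probs.max(axis=1) >= conf_threshold
    pseudo = [(i, int(probs[i].argmax())) for i in np.flatnonzero(confident)]

    # Query the top-k remaining instances from the oracle.
    candidates = np.flatnonzero(~confident)
    query = candidates[np.argsort(-score[candidates])[:k]]
    return query, pseudo
```

In practice, the queried indices would be labeled by the oracle, the pseudo-labeled pairs added under the labeling budget, and the model retrained before the next round.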

Cite

APA

Gikunda, P. K., & Jouandeau, N. (2021). Budget Active Learning for Deep Networks. In Advances in Intelligent Systems and Computing (Vol. 1250 AISC, pp. 488–504). Springer. https://doi.org/10.1007/978-3-030-55180-3_36
