There are many situations in supervised learning where data acquisition is very expensive and often constrained by a user's budget. One way to address this limitation is active learning. In this study, we focus on the fixed-budget regime and propose a novel algorithm for pool-based active learning. The proposed method performs active learning with a pre-trained acquisition function so that performance is maximized when the number of samples that can be acquired is fixed. To implement this algorithm, the proposed method uses deep-neural-network-based reinforcement learning to pre-train an acquisition function tailored to the fixed-budget setting. Using this pre-trained deep Q-learning-based acquisition function, we realize an active learner that selects a sample for annotation from the pool of unlabeled samples while taking the fixed budget into account. The proposed method is experimentally shown to be comparable with or superior to existing active learning methods, suggesting the effectiveness of the proposed approach for fixed-budget active learning.
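The core idea — an acquisition function realized as a Q-function that scores each unlabeled sample given the current state — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: it assumes a linear Q-function in place of the deep Q-network, and hypothetical two-dimensional state features (e.g., predictive uncertainty and remaining budget fraction) for each pool sample.

```python
import numpy as np

def q_values(states, weights):
    """Linear stand-in for the deep Q-network: Q(s) = s . w.
    (The paper uses a deep neural network; this is a toy proxy.)"""
    return states @ weights

def select_query(pool_states, weights, epsilon=0.0, rng=None):
    """Return the index of the unlabeled sample with the highest Q-value.
    Epsilon-greedy exploration is used during training of the acquisition
    function; at deployment the selection is greedy (epsilon=0)."""
    rng = rng or np.random.default_rng(0)
    if rng.random() < epsilon:
        return int(rng.integers(len(pool_states)))
    return int(np.argmax(q_values(pool_states, weights)))

# Toy pool: each row is a hypothetical state-feature vector for one
# unlabeled sample, e.g. (predictive uncertainty, remaining budget fraction).
pool = np.array([[0.2, 0.5],
                 [0.9, 0.5],   # most uncertain sample
                 [0.4, 0.5]])
w = np.array([1.0, 0.1])       # hypothetical learned weights

print(select_query(pool, w))   # greedy selection picks index 1
```

At each round, the selected sample would be annotated, the learner retrained, and the state features recomputed; the budget term in the state is what lets a pre-trained Q-function adapt its selection strategy to how many queries remain.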
Citation
Taguchi, Y., Hino, H., & Kameyama, K. (2021). Pre-Training Acquisition Functions by Deep Reinforcement Learning for Fixed Budget Active Learning. Neural Processing Letters, 53(3), 1945–1962. https://doi.org/10.1007/s11063-021-10476-z