Objectives: This study evaluated and compared a variety of active learning strategies, including a novel strategy we propose, as applied to the task of filtering incorrect semantic predications in SemMedDB.

Materials and Methods: We evaluated 8 active learning strategies spanning 3 types (uncertainty, representative, and combined) on 2 datasets of 6,000 total semantic predications from SemMedDB, covering the domains of substance interactions and clinical medicine, respectively. We also designed a novel combined strategy, dynamic β, which requires no hand-tuned hyperparameters. Each strategy was assessed by the Area under the Learning Curve (ALC) and by the number of training examples required to reach a target Area Under the ROC curve (AUC). We also visualized and compared the query patterns of the strategies.

Results: All types of active learning (AL) methods outperformed the baseline on both datasets. Combined strategies achieved the highest ALC, exceeding the baseline by over 0.05 ALC on both datasets and reducing annotation effort by 58% in the best case. Representative strategies performed well, but were matched or surpassed by the combined methods. Our proposed AL method, dynamic β, showed promising ability to achieve near-optimal performance across both datasets.

Discussion: Our visual analysis of query patterns indicates that strategies which efficiently obtain a representative subsample perform better on this task.

Conclusion: Active learning is shown to be effective at reducing annotation costs for filtering incorrect semantic predications from SemMedDB. Our proposed AL method demonstrated promising performance.
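The two evaluation ideas in the abstract can be illustrated with a minimal sketch. The function names and toy numbers below are hypothetical, not taken from the paper: one function picks the most uncertain unlabeled example (the standard least-confidence query rule, with predicted probability nearest 0.5), and the other computes a normalized area under a learning curve by the trapezoidal rule, so that a flat curve at AUC = a yields ALC = a.

```python
def pick_most_uncertain(probs):
    """Index of the unlabeled example whose predicted probability
    of the positive class is closest to 0.5 (least-confidence rule)."""
    return min(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))

def alc(aucs):
    """Normalized area under a learning curve: trapezoidal area over
    evenly spaced annotation steps, divided by the number of steps."""
    if len(aucs) < 2:
        return aucs[0]
    area = sum((aucs[i] + aucs[i + 1]) / 2 for i in range(len(aucs) - 1))
    return area / (len(aucs) - 1)

# Toy posterior probabilities for 4 unlabeled predications
probs = [0.9, 0.48, 0.2, 0.7]
print(pick_most_uncertain(probs))  # -> 1 (p = 0.48 is nearest 0.5)

# Toy AUC measured after each annotation batch
curve = [0.60, 0.72, 0.80, 0.84]
print(round(alc(curve), 3))  # -> 0.747
```

A strategy with a higher ALC reaches good AUC with fewer labeled examples, which is what the abstract's 58% annotation-effort reduction reflects.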
Vasilakes, J., Rizvi, R., Melton, G. B., Pakhomov, S., & Zhang, R. (2018). Evaluating active learning methods for annotating semantic predications. JAMIA Open, 1(2), 275–282. https://doi.org/10.1093/jamiaopen/ooy021