On the influence of the number of anomalous and normal examples in anomaly-based annotation errors detection

Abstract

Anomaly detection techniques have been shown to help detect word-level annotation errors in read-speech corpora for text-to-speech synthesis. In this framework, correctly annotated words are treated as normal examples on which the detection methods are trained. Mis-annotated words are then taken as anomalous examples that do not conform to the normal patterns learned by the trained detection models. Since it can be hard to collect enough examples to train and optimize an anomaly detector, in this paper we investigate the influence of the number of anomalous and normal examples on the detection accuracy of several anomaly detection models: Gaussian-distribution-based models, one-class support vector machines, and a Grubbs' test based model. Our experiments show that the number of examples can be reduced significantly without a large drop in detection accuracy.
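
To illustrate the general framework described above (not the authors' actual implementation), the minimal Python sketch below trains a one-class support vector machine on feature vectors of "normal" words only and then flags examples that deviate from the learned pattern. The feature values, dimensions, and parameter settings here are placeholder assumptions; the paper works with real word-level features from read-speech corpora.

    # Minimal one-class anomaly detection sketch (illustrative only).
    # Feature vectors are synthetic placeholders, not the paper's features.
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)

    # Hypothetical word-level feature vectors (e.g., duration or F0 statistics).
    normal_train = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # correctly annotated words
    normal_test = rng.normal(loc=0.0, scale=1.0, size=(50, 4))
    anomalous_test = rng.normal(loc=4.0, scale=1.0, size=(10, 4))  # mis-annotated words

    # Train on normal examples only; nu bounds the fraction of training outliers.
    detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
    detector.fit(normal_train)

    # predict() returns +1 for inliers (normal) and -1 for outliers (anomalous).
    print("normal words flagged as anomalous:", np.sum(detector.predict(normal_test) == -1))
    print("mis-annotated words detected:", np.sum(detector.predict(anomalous_test) == -1))

Varying the sizes of normal_train and anomalous_test in such a setup is one simple way to probe how the number of available examples affects detection accuracy, which is the question the paper investigates.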

Citation (APA)

Matoušek, J., & Tihelka, D. (2016). On the influence of the number of anomalous and normal examples in anomaly-based annotation errors detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9924 LNCS, pp. 326–334). Springer Verlag. https://doi.org/10.1007/978-3-319-45510-5_37
