Probabilistic language models are widely used in natural language processing and must be built on a suitable data corpus. Limited data is a persistent problem for such models: as time passes and new terms and technical terminology emerge, the original corpus can no longer meet requirements, so cross-domain training is often needed. In this paper, a probabilistic language model is built with filtering and linear interpolation over word N-grams. Then, for three corpora from different domains, each with its own word distribution, perplexity analysis is used to identify corpora with similar word distributions. With this information, the accuracy of the cross-domain probabilistic model can be improved.
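The abstract does not give the paper's actual corpora or interpolation weights, so the following is only a minimal sketch of the two techniques it names: a linearly interpolated N-gram model (here bigram + unigram + uniform, with illustrative lambda weights) and perplexity as a measure of how well a model trained on one domain fits text from another. The toy training and test strings are hypothetical.

```python
import math
from collections import Counter

def train_counts(tokens):
    """Collect unigram and bigram counts from a token list."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def interp_prob(w_prev, w, unigrams, bigrams, total, vocab_size,
                lambdas=(0.6, 0.3, 0.1)):
    """Linear interpolation:
    P(w|w_prev) = l1*P_ML(w|w_prev) + l2*P_ML(w) + l3*(1/V).
    The lambda weights are illustrative, not the paper's values."""
    l1, l2, l3 = lambdas
    p_bigram = bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0
    p_unigram = unigrams[w] / total
    p_uniform = 1.0 / vocab_size  # keeps probability nonzero for unseen words
    return l1 * p_bigram + l2 * p_unigram + l3 * p_uniform

def perplexity(test_tokens, unigrams, bigrams, total, vocab_size):
    """Perplexity = exp(-(1/N) * sum of log-probabilities) on held-out text."""
    log_sum, n = 0.0, 0
    for w_prev, w in zip(test_tokens, test_tokens[1:]):
        log_sum += math.log(
            interp_prob(w_prev, w, unigrams, bigrams, total, vocab_size))
        n += 1
    return math.exp(-log_sum / n)

# Hypothetical usage: train on one domain, score text from another.
# Lower perplexity suggests the two domains have more similar word
# distributions, which is the signal the paper uses to pick corpora.
train = "the language model predicts the next word in the text".split()
test = "the model predicts the word".split()
unigrams, bigrams = train_counts(train)
total, vocab = sum(unigrams.values()), len(unigrams)
print(f"perplexity: {perplexity(test, unigrams, bigrams, total, vocab):.2f}")
```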
Zhang, A. (2020). Effect on probabilistic language model for cross-domain corpus. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11691 LNAI, pp. 561–569). Springer. https://doi.org/10.1007/978-3-030-39431-8_54