Study of Some Distance Measures for Language and Encoding Identification

Abstract

To determine how close two language models (e.g., n-gram models) are, we can use several distance measures. If the models can be represented as distributions, then the similarity between models is essentially the similarity between distributions, and a number of measures are based on an information-theoretic approach. In this paper we present some experiments on using such similarity measures for an old Natural Language Processing (NLP) problem. One of the measures considered is perhaps a novel one, which we have called mutual cross entropy. The other measures are either well known or based on well-known measures, but the results obtained with them vis-à-vis one another may help in gaining insight into how similarity measures work in practice. The first step in processing a text is to identify the language and encoding of its contents. This is a practical problem, since for many languages there are no universally followed text encoding standards. The method used in this paper for language and encoding identification relies on pruned character n-grams, alone as well as augmented with word n-grams. This method seems to give results comparable to other methods.
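To make the distribution-based view concrete, below is a minimal Python sketch of identification with pruned character n-gram distributions. The pruning cutoff, the epsilon smoothing for unseen n-grams, and the reading of "mutual cross entropy" as the symmetrized sum of the two cross entropies are illustrative assumptions, not the paper's exact formulation.

import math
from collections import Counter

def char_ngram_distribution(text, n=3, top_k=1000):
    # Pruned character n-gram model: keep only the top_k most frequent
    # n-grams and normalize their counts to probabilities.
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    pruned = dict(counts.most_common(top_k))
    total = sum(pruned.values())
    return {g: c / total for g, c in pruned.items()}

def cross_entropy(p, q, epsilon=1e-9):
    # H(p, q) = -sum_x p(x) log q(x); epsilon stands in for n-grams
    # unseen in q (an assumed smoothing choice).
    return -sum(px * math.log(q.get(x, epsilon)) for x, px in p.items())

def mutual_cross_entropy(p, q):
    # One plausible reading of "mutual cross entropy": the symmetrized
    # sum H(p, q) + H(q, p), so the measure is direction-independent.
    return cross_entropy(p, q) + cross_entropy(q, p)

# Usage: the trained model with the lowest distance to the test text
# is taken as the identified language/encoding.
if __name__ == "__main__":
    train_en = char_ngram_distribution("the quick brown fox jumps over the lazy dog " * 20)
    train_de = char_ngram_distribution("der schnelle braune fuchs springt ueber den faulen hund " * 20)
    test = char_ngram_distribution("a lazy dog sleeps under the brown fox")
    print("en:", mutual_cross_entropy(test, train_en))
    print("de:", mutual_cross_entropy(test, train_de))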

Citation (APA)

Singh, A. K. (2006). Study of Some Distance Measures for Language and Encoding Identification. In COLING ACL 2006 - Linguistic Distances, Proceedings of the Workshop (pp. 63–72). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1641976.1641985
