Abstract
Topic models have been successfully applied to lexicon extraction. However, most previous methods are limited to document-aligned data. In this paper, we address two challenges of applying topic models to lexicon extraction from non-parallel data: 1) the difficulty of modeling word relationships and 2) the noise in the seed dictionary. To solve these two challenges, we propose two new bilingual topic models that better capture the semantic information of each word while discriminating among the multiple translations in a noisy seed dictionary. We extend the scope of topic models by inverting the roles of "word" and "document". In addition, to handle noise in the seed dictionary, we incorporate the probability of translation selection into our models. Moreover, we propose an effective measure to evaluate the similarity of words across languages and to select the optimal translation pairs. Experimental results on real-world data demonstrate the utility and efficacy of the proposed models.
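As a minimal illustration of what "inverting the roles of word and document" could mean in preprocessing, the sketch below builds one pseudo-document per word from its co-occurring context words, so that a standard topic model could then be run over words instead of documents. This is an assumption-laden simplification for intuition only: `invert_corpus` is a hypothetical helper, and the paper's actual contribution is the bilingual topic models themselves, not this step alone.

```python
from collections import defaultdict

def invert_corpus(documents):
    """Invert 'word' and 'document' roles: each word becomes a
    pseudo-document made of all words it co-occurs with.
    (Illustrative sketch, not the paper's full model.)"""
    pseudo_docs = defaultdict(list)
    for doc in documents:
        for word in doc:
            # The word's "content" is every other token in the same document.
            pseudo_docs[word].extend(w for w in doc if w != word)
    return dict(pseudo_docs)

# Toy corpus: "bank" appears in two different contexts.
docs = [["bank", "money", "loan"], ["bank", "river", "water"]]
inverted = invert_corpus(docs)
# The pseudo-document for "bank" aggregates both contexts,
# which is what lets a topic model capture its senses.
```

Running a topic model over such pseudo-documents yields a topic distribution per word, which can then serve as the word's semantic representation when comparing candidate translations.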
Citation
Ma, T., & Nasukawa, T. (2017). Inverted bilingual topic models for lexicon extraction from non-parallel data. In IJCAI International Joint Conference on Artificial Intelligence (pp. 4075–4081). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/569