We present novel models for domain adaptation based on the neural network joint model (NNJM). Our models maximize the cross entropy by regularizing the loss function with respect to the in-domain model. Domain adaptation is carried out by assigning higher weights to out-domain sequences that are similar to the in-domain data. In our alternative model, we take a more restrictive approach by additionally penalizing sequences similar to the out-domain data. Our models achieve better perplexities than the baseline NNJM models and yield improvements of up to 0.5 and 0.6 BLEU points on the Arabic-to-English and English-to-German language pairs, respectively, on a standard task of translating TED talks.
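The weighting idea described above can be illustrated with a minimal, hypothetical sketch: out-domain training sequences are scored by how in-domain-like they are, and that score scales each sequence's contribution to a cross-entropy loss. The weighting function, variable names, and use of language-model cross-entropies below are assumptions for illustration, not the authors' released implementation.

import torch
import torch.nn.functional as F

def sequence_weight(h_in, h_out, penalize_out_domain=False):
    """Hypothetical per-sequence weight for an out-domain sentence.

    h_in / h_out: per-word cross-entropies of the sequence under in-domain
    and out-domain language models (lower h_in => more in-domain-like).
    """
    if penalize_out_domain:
        # More restrictive variant: additionally penalize sequences that
        # look like typical out-domain data (cross-entropy difference).
        score = h_out - h_in
    else:
        score = -h_in
    # Squash the score into (0, 1) so it can act as a soft weight.
    return torch.sigmoid(torch.tensor(score)).item()

def weighted_nll_loss(logits, targets, weights):
    """Cross-entropy over target words, scaled per example by its weight."""
    # logits: (batch, vocab), targets: (batch,), weights: (batch,)
    nll = F.cross_entropy(logits, targets, reduction="none")
    return (weights * nll).mean()

In this sketch, the non-restrictive variant only rewards similarity to the in-domain data, while the restrictive variant also subtracts similarity to the out-domain data before the weight is applied.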
Joty, S., Sajjad, H., Durrani, N., Al-Mannai, K., Abdelali, A., & Vogel, S. (2015). How to avoid unwanted pregnancies: Domain adaptation using neural network models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015) (pp. 1259–1270). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d15-1147