Abstract
The aim of this paper is to investigate how much the effectiveness of a Question Answering (QA) system is affected by the performance of Machine Translation (MT)-based question translation. Nearly 200 questions were selected from TREC QA tracks and run through a question answering system, which was able to answer 42.6% of them correctly in a monolingual run. These questions were then translated manually from English into Arabic and back into English using an MT system, and re-applied to the QA system, which answered only 10.2% of the translated questions correctly. An analysis of which types of translation error affected which questions was conducted, concluding that factoid-type questions are less prone to translation error than others.
Al-Maskari, A., & Sanderson, M. (2006). The Affect of Machine Translation on the Performance of Arabic-English QA System. In EACL 2006 - 11th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Workshop on Multilingual Question Answering, MLQA 2006 (pp. 9–14). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1708097.1708100