Neural generation for Czech: Data and baselines

Abstract

We present the first dataset targeted at end-to-end NLG in Czech in the restaurant domain, along with several strong baseline models using the sequence-to-sequence approach. While non-English NLG is under-explored in general, Czech, as a morphologically rich language, makes the task even harder: since Czech requires inflecting named entities, delexicalization or copy mechanisms do not work out-of-the-box and lexicalizing the generated outputs is non-trivial. In our experiments, we present two different approaches to this problem: (1) using a neural language model to select the correct inflected form while lexicalizing, and (2) a two-step generation setup: our sequence-to-sequence model generates an interleaved sequence of lemmas and morphological tags, which are then inflected by a morphological generator.
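The second approach can be pictured with a minimal sketch: the decoder is assumed to emit alternating lemma and positional-tag tokens, and a separate inflection step turns these into surface forms. The tiny lookup table below is a hypothetical stand-in for a real morphological generator (e.g. a tool such as MorphoDiTa); the function names, tags, and example output are illustrative assumptions, not the paper's implementation.

```python
from typing import List, Tuple

# Hypothetical lookup: (lemma, positional tag) -> inflected surface form.
# In practice a morphological generator would provide this mapping.
INFLECTION_TABLE = {
    ("blízko", "RR--2----------"): "blízko",          # preposition governing genitive
    ("Karlův most", "NNIS2-----A----"): "Karlova mostu",  # named entity in genitive
}


def parse_interleaved(tokens: List[str]) -> List[Tuple[str, str]]:
    """Split an interleaved seq2seq output [lemma, tag, lemma, tag, ...]
    into (lemma, tag) pairs."""
    assert len(tokens) % 2 == 0, "expected alternating lemma/tag tokens"
    return list(zip(tokens[0::2], tokens[1::2]))


def inflect(lemma: str, tag: str) -> str:
    """Look up the surface form; fall back to the lemma when the
    generator has no entry (a simple back-off)."""
    return INFLECTION_TABLE.get((lemma, tag), lemma)


def realize(tokens: List[str]) -> str:
    """Turn an interleaved lemma/tag sequence into a surface string."""
    return " ".join(inflect(lemma, tag) for lemma, tag in parse_interleaved(tokens))


if __name__ == "__main__":
    # Hypothetical decoder output for "near Charles Bridge".
    decoder_output = ["blízko", "RR--2----------",
                      "Karlův most", "NNIS2-----A----"]
    print(realize(decoder_output))  # -> "blízko Karlova mostu"
```

The design point of the two-step setup is that the neural model never has to learn Czech inflection tables: it only predicts lemmas plus tags, and the deterministic morphological generator handles surface realization, including for named entities unseen in training.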

Citation (APA)

Dušek, O., & Jurcícek, F. (2019). Neural generation for Czech: Data and baselines. In INLG 2019 - 12th International Conference on Natural Language Generation, Proceedings of the Conference (pp. 563–574). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-8670
