RNN Based Language Generation Models for a Hindi Dialogue System

Abstract

Natural Language Generation (NLG) is a crucial component of a Spoken Dialogue System. Its task is to generate utterances with intended attributes such as fluency, variation, readability, scalability and adequacy. Because handcrafted models are rigid and tedious to build, researchers have proposed many statistical and deep-learning based models as more suitable options for generating utterances from a given Dialogue-Act (DA). This paper presents several models based on the Recurrent Neural Network Language Generation (RNNLG) framework, along with an analysis of how they capture the intended meaning in terms of content planning (modelling the semantic input) and surface realization (final sentence generation) on a proposed unaligned Hindi dataset. The models show consistent performance on our natively developed dataset, with the Modified-Semantically-Controlled LSTM (MSC-LSTM) outperforming all others in terms of total slot-error (T-Error).
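The slot-error metric mentioned above is commonly computed in NLG evaluation as the fraction of DA slots that are either missing from, or redundantly added to, the generated utterance. The sketch below illustrates this standard formulation; the paper's exact T-Error definition may differ, and all names here are illustrative.

```python
# A minimal sketch of a slot-error style metric, assuming the standard
# definition: (missing + redundant slots) / total slots in the dialogue-act.
# Illustrative only; the paper's T-Error computation may differ in detail.

def slot_error(da_slots, realized_slots):
    """Compare the slots requested by the input dialogue-act (DA)
    against the slots realized in the generated utterance."""
    da = set(da_slots)
    out = set(realized_slots)
    missing = len(da - out)      # requested slots absent from the utterance
    redundant = len(out - da)    # realized slots the DA never requested
    return (missing + redundant) / max(len(da), 1)

# Example: DA asks for name and area; the output realizes name
# and (spuriously) phone, so one slot is missing and one is redundant.
err = slot_error(["name", "area"], ["name", "phone"])  # -> 1.0
```

A lower score is better: a perfect realization that covers every requested slot and adds none scores 0.0.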

Citation (APA)

Singh, S., Malviya, S., Mishra, R., Barnwal, S. K., & Tiwary, U. S. (2020). RNN Based Language Generation Models for a Hindi Dialogue System. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11886 LNCS, pp. 124–137). Springer. https://doi.org/10.1007/978-3-030-44689-5_12
