Balancing Multi-Domain Corpora Learning for Open-Domain Response Generation


Abstract

Open-domain conversational systems are expected to generate equally good responses across multiple domains. Previous work has achieved good performance on a single corpus, but training and evaluating on multiple corpora from different domains is less studied. This paper explores methods of generating relevant responses for each of several multi-domain corpora. We first examine interleaved learning, which intermingles multiple corpora, as the baseline. We then investigate two multi-domain learning methods, labeled learning and multi-task labeled learning, which encode each corpus through a unique corpus embedding. Furthermore, we propose Domain-specific Frequency (DF), a novel word-level importance weight that measures the relative importance of a word for a specific corpus compared to other corpora. Based on DF, we propose weighted learning, a method that integrates DF into the loss function. We also adopt DF as a new evaluation metric. Extensive experiments show that our methods achieve significant improvements on both automatic and human evaluations. We share our code and data for reproducibility.
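The weighted-learning idea in the abstract, scoring each word by how characteristic it is of one corpus relative to the others and scaling the training loss accordingly, can be illustrated with a short sketch. The DF formulation below (a relative-frequency ratio) and the helper names domain_specific_frequency and df_weighted_loss are assumptions for illustration only; the paper's exact definitions may differ.

```python
from collections import Counter

import torch.nn.functional as F


def domain_specific_frequency(corpora):
    """Hypothetical DF: a word's relative frequency within one corpus divided
    by its relative frequency across all corpora (the paper's exact formula
    may differ). `corpora` maps a corpus name to a list of utterances."""
    per_corpus = {name: Counter(w for utt in utts for w in utt.split())
                  for name, utts in corpora.items()}
    overall = Counter()
    for counts in per_corpus.values():
        overall.update(counts)
    overall_size = sum(overall.values())

    df = {}
    for name, counts in per_corpus.items():
        corpus_size = sum(counts.values())
        df[name] = {w: (c / corpus_size) / (overall[w] / overall_size)
                    for w, c in counts.items()}
    return df


def df_weighted_loss(logits, targets, token_weights):
    """Weighted-learning sketch: per-token cross-entropy scaled by the DF
    weight of each target token.
    logits: (batch, seq_len, vocab); targets, token_weights: (batch, seq_len)."""
    ce = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    return (ce * token_weights).mean()


if __name__ == "__main__":
    corpora = {"movies": ["the plot was great", "great acting"],
               "cooking": ["the sauce was great", "season to taste"]}
    df = domain_specific_frequency(corpora)
    print(df["cooking"]["sauce"])  # ~1.86: characteristic of the cooking corpus
    print(df["cooking"]["great"])  # ~0.62: "great" is common across corpora
```

Under this reading, words that occur mostly in one corpus receive weights above 1 and dominate that corpus's loss, while words shared across corpora are down-weighted, which matches the abstract's goal of emphasizing domain-relevant vocabulary.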

Cite

APA

Xing, Y., Cai, J., Barlaug, N., Liu, P., & Gulla, J. A. (2022). Balancing Multi-Domain Corpora Learning for Open-Domain Response Generation. In Findings of the Association for Computational Linguistics: NAACL 2022 (pp. 2104–2120). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-naacl.162
