DialogConv: A Lightweight Fully Convolutional Network for Multi-view Response Selection

Abstract

Current end-to-end retrieval-based dialogue systems are mainly built on Recurrent Neural Networks or Transformers with attention mechanisms. Although these models achieve promising results, they often suffer from slow inference or a huge number of parameters. In this paper, we propose a novel lightweight fully convolutional architecture, called DialogConv, for response selection. DialogConv is built exclusively on convolution to extract matching features between context and response. Dialogues are modeled as 3D tensors, on which DialogConv performs convolution operations over the embedding view, the word view, and the utterance view to capture richer semantic information from multiple contextual perspectives. On four benchmark datasets, compared with state-of-the-art baselines, DialogConv is on average about 8.5× smaller in size, and 79.39× and 10.64× faster on CPU and GPU devices, respectively. At the same time, DialogConv achieves competitive response-selection effectiveness.
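The multi-view idea in the abstract can be illustrated with a small sketch. This is not the authors' implementation: the tensor sizes, the smoothing kernel, and the helper `conv1d_along` are all illustrative assumptions, chosen only to show how one dialogue tensor with axes (utterance, word, embedding) admits convolutions along each view.

```python
import numpy as np

# Conceptual sketch (NOT the paper's architecture): a dialogue context
# modeled as a 3D tensor with axes (utterance, word, embedding), so that
# 1D convolutions can slide over different "views" of the same dialogue.
U, W, E = 4, 6, 8          # illustrative sizes: utterances, words, embedding dim
rng = np.random.default_rng(0)
context = rng.standard_normal((U, W, E))

def conv1d_along(x, axis, kernel):
    """Valid-mode 1D convolution applied along one axis of a 3D tensor."""
    x = np.moveaxis(x, axis, -1)
    k = len(kernel)
    out = np.stack(
        [np.tensordot(x[..., i:i + k], kernel, axes=([-1], [0]))
         for i in range(x.shape[-1] - k + 1)],
        axis=-1,
    )
    return np.moveaxis(out, -1, axis)

kernel = np.array([0.25, 0.5, 0.25])   # illustrative smoothing kernel

word_view = conv1d_along(context, axis=1, kernel=kernel)   # slide over words
utt_view = conv1d_along(context, axis=0, kernel=kernel)    # slide over utterances
emb_view = conv1d_along(context, axis=2, kernel=kernel)    # slide over embedding dims

print(word_view.shape)  # (4, 4, 8)
print(utt_view.shape)   # (2, 6, 8)
print(emb_view.shape)   # (4, 6, 6)
```

Each call consumes the same tensor but contracts a different axis, which is the sense in which one convolutional stack can gather word-level, utterance-level, and embedding-level matching signals without recurrence or attention.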

Citation (APA)

Liu, Y., Feng, S., Gao, W., Wang, D., & Zhang, Y. (2022). DialogConv: A Lightweight Fully Convolutional Network for Multi-view Response Selection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 12086–12098). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.828
