Data augmentation seeks to manipulate the available training data to improve the generalization ability of models. We investigate two data augmentation proxies, permutation and flipping, for the neural dialog response selection task on various models over multiple datasets in both Chinese and English. Unlike standard data augmentation techniques, our method combines the original and synthesized data for prediction. Empirical results show that our approach can gain 1 to 3 recall-at-1 points over baseline models in both full-scale and small-scale settings.
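The abstract does not spell out how the permutation and flipping proxies operate or how original and synthesized inputs are combined at prediction time, so the Python sketch below is only one plausible reading, not the authors' implementation. It assumes both proxies act on the utterances of the dialog context (shuffling them for permutation, reversing them for flipping) and that prediction-time combination means averaging the matching scores of the original and augmented contexts. All names (permute_context, flip_context, score_with_augmentation, rank_candidates) and the toy word-overlap scorer are hypothetical.

import random

def permute_context(context_utterances, rng=random):
    # Permutation proxy (assumed interpretation): shuffle the order of context utterances.
    permuted = list(context_utterances)
    rng.shuffle(permuted)
    return permuted

def flip_context(context_utterances):
    # Flipping proxy (assumed interpretation): reverse the order of context utterances.
    return list(reversed(context_utterances))

def score_with_augmentation(model_score, context_utterances, candidate_response):
    # Combine original and synthesized inputs at prediction time by averaging the
    # model's matching scores over the original, permuted, and flipped contexts.
    variants = [
        list(context_utterances),
        permute_context(context_utterances),
        flip_context(context_utterances),
    ]
    scores = [model_score(v, candidate_response) for v in variants]
    return sum(scores) / len(scores)

def rank_candidates(model_score, context_utterances, candidates):
    # Recall-at-1 counts how often the top-ranked candidate is the true response.
    return max(candidates,
               key=lambda r: score_with_augmentation(model_score, context_utterances, r))

if __name__ == "__main__":
    # Toy scorer: word overlap between the concatenated context and the candidate.
    def toy_score(context, response):
        ctx_words = set(" ".join(context).lower().split())
        return len(ctx_words & set(response.lower().split()))

    context = ["hi , how are you ?", "good , just got back from a run"]
    candidates = ["nice , running is great exercise", "the stock market closed early"]
    print(rank_candidates(toy_score, context, candidates))

In a real setup, the model_score callable would be the trained response-selection model, and recall-at-1 would be computed over held-out dialogs by checking how often rank_candidates returns the ground-truth response.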
CITATION
Du, W., & Black, A. W. (2018). Data Augmentation for Neural Online Chat Response Selection. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI (pp. 52–58). Association for Computational Linguistics. https://doi.org/10.18653/v1/W18-5708