Adaptive Convolutional Filter Generation for Natural Language Understanding

  • Shen D
  • Min M
  • Li Y
  • et al.
ArXiv: 1709.08294


Convolutional neural networks (CNNs) have recently emerged as a popular building block for natural language processing (NLP). Despite their success, most existing CNN models employed in NLP are not expressive enough, in the sense that all input sentences share the same learned (and static) set of filters. Motivated by this problem, we propose an adaptive convolutional filter generation framework for natural language understanding, leveraging a meta network to generate input-aware filters. We further generalize our framework to model question-answer sentence pairs and propose an adaptive question answering (AdaQA) model; a novel two-way feature abstraction mechanism is introduced to encapsulate co-dependent sentence representations. We investigate the effectiveness of our framework on document categorization and answer sentence selection tasks, achieving state-of-the-art performance on several benchmark datasets.
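To make the core idea concrete, the following is a minimal NumPy sketch of input-aware filter generation. It is not the paper's implementation: all dimensions, the single-linear-layer meta network, and the mean-pooled sentence summary are illustrative assumptions. A small meta network reads a fixed-size summary of the input sentence and emits the weights of the convolutional filters, so different sentences are convolved with different filters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): embedding size d,
# filter width w, number of filters k, sentence length seq_len.
d, w, k, seq_len = 8, 3, 4, 10

# A sentence represented as a sequence of word embeddings.
sentence = rng.standard_normal((seq_len, d))

# Meta network (here a single linear map, an assumption): it maps a
# fixed-size summary of the sentence to the weights of k filters,
# each of shape (w, d). Static CNNs would learn `filters` directly.
summary = sentence.mean(axis=0)                    # (d,)
W_meta = rng.standard_normal((d, k * w * d)) * 0.1
filters = (summary @ W_meta).reshape(k, w, d)      # input-aware filters

def conv1d(x, f):
    """Valid 1-D convolution of a sentence x (L, d) with filters f (k, w, d)."""
    L, _ = x.shape
    k_, w_, _ = f.shape
    out = np.empty((L - w_ + 1, k_))
    for i in range(L - w_ + 1):
        window = x[i:i + w_]                       # (w, d)
        # Inner product of each filter with the current window.
        out[i] = np.tensordot(f, window, axes=([1, 2], [0, 1]))
    return out

# Convolve with the generated filters, apply ReLU, then max-over-time
# pooling to obtain a fixed-size sentence representation.
feature_map = np.maximum(conv1d(sentence, filters), 0.0)
sentence_repr = feature_map.max(axis=0)            # shape (k,)
```

In the AdaQA setting described above, the same mechanism is applied in both directions: a summary of the question generates the filters used to encode the answer, and vice versa, yielding co-dependent representations of the pair.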

Shen, D., Min, M. R., Li, Y., & Carin, L. (2017). Adaptive Convolutional Filter Generation for Natural Language Understanding. ArXiv.
