Randomized deep structured prediction for discourse-level processing


Abstract

Expressive text encoders such as RNNs and Transformer networks have been at the center of recent NLP models. Most of this effort has focused on sentence-level tasks, capturing the dependencies between words within a single sentence or between pairs of sentences. However, certain tasks, such as argumentation mining, require accounting for longer texts and the complicated structural dependencies between their parts. Deep structured prediction is a general framework that combines the complementary strengths of expressive neural encoders and structured inference in highly structured domains. Nevertheless, when the need arises to go beyond sentences, most work relies on combining the output scores of independently trained classifiers, largely because constrained inference comes at a high computational cost. In this paper, we explore the use of randomized inference to alleviate this concern and show that we can efficiently leverage deep structured prediction and expressive neural encoders for a set of tasks involving complicated argumentative structures.
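To make the core idea concrete — replacing exhaustive search over output structures with scoring only a random sample of candidates — here is a minimal toy sketch. All labels, scores, and function names are invented for illustration; this is not the paper's model or inference procedure, just the general sample-and-argmax pattern on a tiny argument-labeling instance.

```python
import itertools
import random

# Toy argument-mining instance: label 4 text segments as Claim or Premise.
# Unary scores stand in for a neural encoder's per-segment outputs; the
# pairwise score encodes a structural dependency (a Premise followed by a
# Claim is rewarded). All numbers are made up for illustration.
LABELS = ("Claim", "Premise")
UNARY = [
    {"Claim": 1.2, "Premise": 0.3},
    {"Claim": 0.2, "Premise": 1.0},
    {"Claim": 0.8, "Premise": 0.9},
    {"Claim": 1.1, "Premise": 0.4},
]

def pairwise(prev_label, cur_label):
    # Dependency between adjacent segments in the output structure.
    return 0.5 if (prev_label == "Premise" and cur_label == "Claim") else 0.0

def score(assignment):
    # Global structure score = unary scores + pairwise scores.
    total = sum(UNARY[i][y] for i, y in enumerate(assignment))
    total += sum(pairwise(assignment[i - 1], assignment[i])
                 for i in range(1, len(assignment)))
    return total

def exhaustive_map():
    # Exact inference: enumerate all |LABELS|^n structures (exponential cost).
    return max(itertools.product(LABELS, repeat=len(UNARY)), key=score)

def randomized_map(num_samples=200, seed=0):
    # Randomized inference: score a fixed-size random sample of structures
    # and keep the best, trading exactness for a bounded compute budget.
    rng = random.Random(seed)
    candidates = (tuple(rng.choice(LABELS) for _ in UNARY)
                  for _ in range(num_samples))
    return max(candidates, key=score)
```

On this 16-structure toy space a modest sample recovers the exact maximizer; the point is that the randomized variant's cost is fixed by the sampling budget, whereas exhaustive or constrained inference grows with the (discourse-level) structure space.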



Citation (APA)

Widmoser, M., Pacheco, M. L., Honorio, J., & Goldwasser, D. (2021). Randomized deep structured prediction for discourse-level processing. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 1174–1184). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.100

