In this work, we present a method for incorporating global context in long documents when making local decisions in sequence labeling problems like NER. Inspired by work in featurized log-linear models (Chieu and Ng, 2002; Sutton and McCallum, 2004), our model learns to attend to multiple mentions of the same word type when generating a representation for each token in context, extending that work to learning representations that can be incorporated into modern neural models. Attending to broader context at test time provides complementary information to pretraining (Gururangan et al., 2020), yields strong gains over equivalently parameterized models lacking such context, and performs best at recognizing entities with high TF-IDF scores (i.e., those that are important within a document).
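The core idea can be illustrated with a minimal sketch (not the authors' implementation): for each token, compute attention over the representations of other mentions of the same word type elsewhere in the document, and combine the resulting context vector with the local representation. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attend_to_same_type(tokens, embeddings):
    """Augment each token's representation with a document-level context
    vector computed by attending over other mentions of the same word type.

    tokens:     list of n (lowercased) word strings for one document
    embeddings: (n, d) array of contextual token representations
    returns:    (n, 2d) array of [local representation ; same-type context]
    """
    n, d = embeddings.shape

    # Index every position at which each word type occurs in the document.
    positions = {}
    for i, tok in enumerate(tokens):
        positions.setdefault(tok, []).append(i)

    augmented = np.zeros((n, 2 * d))
    for i, tok in enumerate(tokens):
        others = [j for j in positions[tok] if j != i]
        if others:
            keys = embeddings[others]                    # (m, d) other mentions
            scores = keys @ embeddings[i] / np.sqrt(d)   # scaled dot-product scores
            weights = softmax(scores)
            context = weights @ keys                     # attention-weighted sum
        else:
            context = np.zeros(d)                        # singleton mention: no context
        augmented[i] = np.concatenate([embeddings[i], context])
    return augmented
```

In the paper's setting, the augmented representation would feed a standard sequence-labeling layer (e.g., a softmax or CRF over NER tags); the sketch above only shows how long-distance mentions of the same type are pooled into each token's representation.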
Jörke, M., Gillick, J., Sims, M., & Bamman, D. (2020). Attending to long-distance document context for sequence labeling. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 3692–3704). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.330