Abstract
The CogNLP-Sheffield submissions to the CMCL 2021 Shared Task examine the value of a variety of cognitively and linguistically inspired features for predicting eye tracking patterns, as both standalone model inputs and as supplements to contextual word embeddings (XLNet). Surprisingly, the smaller pretrained model (XLNet-base) outperforms the larger (XLNet-large), and despite evidence that multi-word expressions (MWEs) provide cognitive processing advantages, MWE features provide little benefit to either model.
Citation
Vickers, P., Wainwright, R., Madabushi, H. T., & Villavicencio, A. (2021). CogNLP-Sheffield at CMCL 2021 Shared Task: Blending Cognitively Inspired Features with Transformer-based Language Models for Predicting Eye Tracking Patterns. In CMCL 2021 - Workshop on Cognitive Modeling and Computational Linguistics, Proceedings (pp. 125–133). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.cmcl-1.16