Visualizing the Relationship Between Encoded Linguistic Information and Task Performance


Abstract

Probing is a popular technique for analyzing whether linguistic information is captured by a well-trained deep neural model, but it is hard to answer how changes in the encoded linguistic information affect task performance. To this end, we study the dynamic relationship between encoded linguistic information and task performance from the viewpoint of Pareto optimality. The key idea is to obtain a set of models that are Pareto-optimal with respect to both objectives. From this viewpoint, we propose a method to find Pareto-optimal models by formalizing the problem as multi-objective optimization. We conduct experiments on two popular NLP tasks, machine translation and language modeling, and investigate the relationship between several kinds of linguistic information and task performance. Experimental results demonstrate that the proposed method outperforms a baseline method. Our empirical findings suggest that some syntactic information is helpful for NLP tasks, whereas encoding more syntactic information does not necessarily lead to better performance, because the model architecture is also an important factor.
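To illustrate the Pareto-optimality idea from the abstract, here is a minimal sketch of how one might identify the Pareto-optimal subset of a set of models scored on two objectives (probing score for encoded linguistic information, and task performance). The model names and scores are hypothetical, not taken from the paper, and this is a generic non-dominated-filtering routine rather than the authors' actual optimization method.

```python
def pareto_front(models):
    """Return names of models not dominated on either objective.

    models: list of (name, probing_score, task_score) tuples,
    where higher is better for both scores. A model is dominated
    if some other model is at least as good on both objectives
    and strictly better on at least one.
    """
    front = []
    for name, p, t in models:
        dominated = any(
            p2 >= p and t2 >= t and (p2 > p or t2 > t)
            for _, p2, t2 in models
        )
        if not dominated:
            front.append(name)
    return front


# Hypothetical (probing, task) scores for three candidate models.
candidates = [
    ("model_a", 0.9, 0.5),  # strong probing, weaker task score
    ("model_b", 0.5, 0.9),  # weaker probing, strong task score
    ("model_c", 0.4, 0.4),  # dominated by both of the above
]
print(pareto_front(candidates))  # → ['model_a', 'model_b']
```

The filtered set traces the trade-off curve the paper visualizes: moving along the front, gains in encoded linguistic information come at the cost of task performance (or vice versa), which is what makes the relationship between the two objectives observable.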

Citation (APA)

Xiang, J., Li, H., Lian, D., Huang, G., Watanabe, T., & Liu, L. (2022). Visualizing the Relationship Between Encoded Linguistic Information and Task Performance. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 410–422). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-acl.35
