On Neurons Invariant to Sentence Structural Changes in Neural Machine Translation

Abstract

We present a methodology for exploring how sentence structure is reflected in the neural representations of machine translation systems. We demonstrate our model-agnostic approach on a Transformer English-German translation model. We analyze the neuron-level correlation of activations between paraphrases, discussing the methodological challenges and the need for confound analysis to isolate the effects of shallow cues. We find that similarity between activation patterns can be mostly accounted for by similarity in word choice and sentence length. We then manipulate neuron activations to control the syntactic form of the output. This intervention is somewhat successful, indicating that deep models capture sentence-structure distinctions despite the absence of such evidence at the neuron level. To conduct our experiments, we develop a semi-automatic method for generating meaning-preserving minimal-pair paraphrases (active-passive voice and adverbial clause-noun phrase) and compile a corpus of such pairs.

Citation (APA)
Patel, G., Choshen, L., & Abend, O. (2022). On Neurons Invariant to Sentence Structural Changes in Neural Machine Translation. In CoNLL 2022 - 26th Conference on Computational Natural Language Learning, Proceedings of the Conference (pp. 194–212). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.conll-1.14
