We focus on the recognition of Dyck-n (Dn) languages with self-attention (SA) networks, a task that has been considered difficult for these networks. We compare the performance of two variants of SA, one with a starting symbol (SA+) and one without (SA−). Our results show that SA+ is able to generalize to longer sequences and deeper dependencies. For D2, we find that SA− breaks down completely on long sequences, whereas SA+ reaches an accuracy of 58.82%. We find the attention maps learned by SA+ to be amenable to interpretation and compatible with a stack-based language recognizer. Surprisingly, the performance of SA networks is on par with that of LSTMs, which provides evidence of the ability of SA to learn hierarchies without recursion.
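For context, the task can be stated concretely: a Dyck-2 string is balanced over two bracket pairs, and a classical recognizer uses a stack. The following minimal sketch illustrates the stack-based procedure the abstract alludes to; the bracket vocabulary and function name are illustrative assumptions, not from the paper.

```python
def is_dyck2(s: str) -> bool:
    """Recognize Dyck-2: balanced strings over two bracket pairs.

    Illustrative sketch only; the paper's models learn this behavior
    from data rather than implementing it explicitly.
    """
    pairs = {")": "(", "]": "["}
    stack = []
    for ch in s:
        if ch in "([":
            stack.append(ch)          # push an opening bracket
        elif ch in pairs:
            # a closing bracket must match the most recent opener
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            return False              # symbol outside the Dyck-2 vocabulary
    return not stack                  # balanced iff nothing is left open

print(is_dyck2("([()])"))  # True
print(is_dyck2("([)]"))    # False
```

Longer strings stress exactly the properties tested in the paper: sequence length (stack usage over time) and nesting depth (maximum stack size).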
Ebrahimi, J., Gelda, D., & Zhang, W. (2020). How can self-attention networks recognize Dyck-n languages? In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 4301–4306). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.384