Guiding symbolic natural language grammar induction via transformer-based sequence probabilities

Abstract

A novel approach to automated learning of syntactic rules governing natural languages is proposed, based on using probabilities assigned to sentences (and potentially longer word sequences) by transformer neural network language models to guide symbolic learning processes like clustering and rule induction. This method exploits the learned linguistic knowledge in transformers, without any reference to their inner representations; hence, the technique is readily adaptable to the continuous appearance of more powerful language models. We show a proof-of-concept example of our proposed technique, using it to guide unsupervised symbolic link-grammar induction methods drawn from our prior research.
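The core idea sketched in the abstract — scoring candidate word sequences with a language model's probabilities and using those scores to guide symbolic clustering — can be illustrated with a toy example. The sketch below is not the authors' implementation: it substitutes a tiny smoothed bigram model for a transformer (any model exposing sequence log-probabilities would slot in the same way), and the `substitutability` heuristic is a hypothetical stand-in for the paper's clustering guidance. Words whose swap barely changes sentence probability are candidates for the same syntactic cluster.

```python
import math
from collections import Counter

# Toy corpus; in practice, sequence probabilities would come from a
# transformer LM rather than the bigram counts used here.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat ran on the mat",
    "a dog ran on the rug",
]
tokens = [s.split() for s in corpus]
vocab = {w for s in tokens for w in s}
unigrams = Counter(w for s in tokens for w in s)
bigrams = Counter((s[i], s[i + 1]) for s in tokens for i in range(len(s) - 1))
V = len(vocab)

def seq_logprob(words):
    """Log-probability of a word sequence under an add-one-smoothed
    bigram model (stand-in for a transformer's sequence score)."""
    return sum(
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
        for a, b in zip(words, words[1:])
    )

def substitutability(w1, w2, sentences):
    """Average log-probability change when w1 is replaced by w2 in the
    sentences containing w1. Values near zero suggest the words are
    interchangeable, i.e. likely members of one syntactic cluster."""
    deltas = [
        seq_logprob([w2 if w == w1 else w for w in sent]) - seq_logprob(sent)
        for sent in sentences
        if w1 in sent
    ]
    return sum(deltas) / len(deltas)

# Nouns swap for nouns with little probability cost; swapping a noun
# for a preposition tanks the sequence probability.
print(substitutability("cat", "dog", tokens))  # near 0
print(substitutability("cat", "on", tokens))   # strongly negative
```

A grammar-induction loop could greedily merge the word pair with the highest substitutability score, exactly the kind of symbolic step the abstract proposes to guide with transformer probabilities.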

Citation (APA)

Goertzel, B., Suárez-Madrigal, A., & Yu, G. (2020). Guiding symbolic natural language grammar induction via transformer-based sequence probabilities. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12177 LNAI, pp. 153–163). Springer. https://doi.org/10.1007/978-3-030-52152-3_16
