Signal in Noise: Exploring Meaning Encoded in Random Character Sequences with Character-Aware Language Models


Abstract

Natural language processing models learn word representations based on the distributional hypothesis, which asserts that word context (e.g., co-occurrence) correlates with meaning. We propose that n-grams composed of random character sequences, or garble, provide a novel context for studying word meaning both within and beyond extant language. In particular, randomly generated character n-grams lack meaning but contain primitive information based on the distribution of characters they contain. By studying the embeddings of a large corpus of garble, extant language, and pseudowords using CharacterBERT, we identify an axis in the model's high-dimensional embedding space that separates these classes of n-grams. Furthermore, we show that this axis relates to structure within extant language, including word part-of-speech, morphology, and concept concreteness. Thus, in contrast to studies that are mainly limited to extant language, our work reveals that meaning and primitive information are intrinsically linked.
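The abstract's core idea can be illustrated with a small sketch. This is not the paper's pipeline (which uses CharacterBERT embeddings and a much larger corpus); it is a toy stand-in that uses simple character-frequency vectors in place of learned embeddings and fits a logistic regression to recover a single linear "axis" separating garble from real words. All word lists, feature choices, and hyperparameters here are illustrative assumptions.

```python
import random
import string
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_garble(length, rng):
    # Random character n-gram ("garble"): a uniformly sampled letter sequence.
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

def char_features(word):
    # 26-dim normalized character-frequency vector: a toy stand-in for the
    # high-dimensional CharacterBERT embeddings used in the paper.
    v = np.zeros(26)
    for ch in word:
        v[ord(ch) - ord("a")] += 1
    return v / max(len(word), 1)

rng = random.Random(0)

# Small illustrative sample of extant English words (not the paper's corpus).
real_words = ["signal", "noise", "meaning", "random", "character", "sequence",
              "language", "model", "context", "garble", "embedding", "axis"]
garble = [make_garble(rng.randint(4, 9), rng) for _ in range(len(real_words))]

X = np.array([char_features(w) for w in real_words + garble])
y = np.array([1] * len(real_words) + [0] * len(garble))

# The fitted weight vector plays the role of the separating axis in
# embedding space that distinguishes extant language from garble.
clf = LogisticRegression().fit(X, y)
axis = clf.coef_[0]
```

Projecting a new n-gram's feature vector onto `axis` gives a scalar score analogous to the paper's garble-versus-language direction; in the actual study this axis is found in CharacterBERT's embedding space rather than in raw character frequencies.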

Citation (APA)
Chu, M. B., Desikan, B. S., Nadler, E. O., Sardo, R. L., Darragh-Ford, E., & Guilbeault, D. (2022). Signal in Noise: Exploring Meaning Encoded in Random Character Sequences with Character-Aware Language Models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 7120–7134). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.492
