Domain-Informed Probing of wav2vec 2.0 Embeddings for Phonetic Features


Abstract

In recent years, large transformer architectures have become available that provide a novel means of generating high-quality vector representations of speech audio. These transformers use an attention mechanism to produce representations enriched with contextual and positional information from the input sequence. Previous work has explored the performance of these models on tasks such as speech recognition and speaker verification, but there has been little inquiry into how the contextual information supplied by the transformer architecture affects the representation of phonetic information within these models. In this paper, we report the results of a series of probing experiments on the representations generated by the transformer component of the wav2vec 2.0 model, focusing on how phonetic categorization information is encoded within the generated embeddings. We find that the contextual information produced by the transformer's operation enhances the model's capture of phonetic detail and allows distinctions to emerge in acoustic data that are otherwise difficult to separate.
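
To make the probing setup concrete, below is a minimal sketch of a layer-wise probing experiment in the spirit of the abstract, assuming the HuggingFace `transformers` library, the `facebook/wav2vec2-base` checkpoint, and a simple logistic-regression probe over frame embeddings aligned to phone labels. The checkpoint, layer index, alignment source, and classifier are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: probe one transformer layer of wav2vec 2.0 for phonetic
# category information. Checkpoint, layer, and probe are assumptions.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

model_name = "facebook/wav2vec2-base"  # assumed checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def frame_embeddings(waveform, sample_rate=16_000, layer=6):
    """Return per-frame embeddings from one transformer layer.

    hidden_states[0] is the CNN feature-encoder output; indices 1..12
    are the transformer blocks of the base model.
    """
    inputs = extractor(waveform, sampling_rate=sample_rate,
                       return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.hidden_states[layer].squeeze(0).numpy()  # (frames, 768)

def probe_accuracy(frames, phones):
    """Linear probe: are phone categories linearly separable in the
    embeddings? `frames` is (N, 768); `phones` holds N phone labels
    (e.g. from forced alignments, a hypothetical labeling step here).
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        frames, phones, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)
```

Running such a probe separately on each entry of `hidden_states` is what lets one localize where contextual phonetic detail accumulates across the transformer stack.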

Citation (APA)

English, P. C., Kelleher, J. D., & Carson-Berndsen, J. (2022). Domain-Informed Probing of wav2vec 2.0 Embeddings for Phonetic Features. In SIGMORPHON 2022 - 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, Proceedings of the Workshop (pp. 83–91). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.sigmorphon-1.9
