Training objectives based on predictive coding have recently been shown to be very effective at learning meaningful representations from unlabeled speech. One example is Autoregressive Predictive Coding (Chung et al., 2019), which trains an autoregressive RNN to generate an unseen future frame given a context such as recent past frames. The basic hypothesis of these approaches is that hidden states that can accurately predict future frames are a useful representation for many downstream tasks. In this paper we extend this hypothesis and aim to enrich the information encoded in the hidden states by training the model to make more accurate future predictions. We propose an auxiliary objective that serves as a regularizer to improve generalization of the future frame prediction task. Experimental results on phonetic classification, speech recognition, and speech translation not only support the hypothesis, but also demonstrate the effectiveness of our approach in learning representations that contain richer phonetic content.
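For concreteness, below is a minimal sketch of the basic APC objective the abstract builds on: an RNN reads past frames and a linear head predicts the frame n steps ahead, trained with an L1 loss, while the hidden states serve as the learned representation. The GRU architecture, layer sizes, and shift n=3 are illustrative assumptions rather than the paper's exact configuration, and the proposed auxiliary regularization objective is not shown here.

```python
import torch
import torch.nn as nn

class APC(nn.Module):
    """Minimal sketch of Autoregressive Predictive Coding (Chung et al., 2019).

    An autoregressive RNN encodes past frames; a linear head predicts the
    frame n steps in the future. Hyperparameters are illustrative only.
    """
    def __init__(self, feat_dim=80, hidden_dim=512, num_layers=3):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, feat_dim)

    def forward(self, frames):
        # frames: (batch, time, feat_dim) acoustic features, e.g. log-Mels
        hidden, _ = self.rnn(frames)       # (batch, time, hidden_dim)
        return self.head(hidden), hidden   # predictions and representations


def apc_loss(model, frames, n=3):
    """L1 loss between the prediction at time t and the true frame at t + n."""
    preds, _ = model(frames)
    # Align predictions with targets shifted n steps into the future.
    return (preds[:, :-n] - frames[:, n:]).abs().mean()


# Toy usage: one gradient step on random stand-in features.
model = APC()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
batch = torch.randn(4, 100, 80)            # 4 utterances, 100 frames, 80 dims
loss = apc_loss(model, batch, n=3)
loss.backward()
optimizer.step()
print(f"APC loss: {loss.item():.3f}")
```

After training, the RNN's hidden states (not the predicted frames) are what would be fed to downstream tasks such as phonetic classification or speech recognition.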
Chung, Y. A., & Glass, J. (2020). Improved speech representations with multi-target autoregressive predictive coding. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 2353–2358). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.213