Abstract
One of the key factors enabling machine learning models to comprehend and solve real-world tasks is the ability to leverage multimodal data. Unfortunately, annotating multimodal data is challenging and expensive. Recently, self-supervised multimodal methods that combine vision and language have been proposed to learn multimodal representations without annotation. However, these methods often ignore the presence of high levels of noise and thus yield sub-optimal results. In this work, we show that the problem of noise estimation for multimodal data can be reduced to a multimodal density estimation task. Building on this reduction, we propose a noise estimation building block for multimodal representation learning that relies strictly on the inherent correlation between the different modalities. We demonstrate that our noise estimation can be broadly integrated and that it achieves results comparable to the state of the art on five benchmark datasets for two challenging multimodal tasks: Video Question Answering and Text-to-Video Retrieval. Furthermore, we provide a theoretical probabilistic error bound substantiating our empirical results and analyze failure cases. Code: https://github.com/elad-amrani/ssml.
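To make the abstract's central reduction (noise estimation via density estimation) concrete, the sketch below scores each video-text pair by a k-nearest-neighbor density proxy in a joint embedding space and treats low-density pairs as likely noisy. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the function name, the use of cosine similarity over concatenated embeddings, and the choice of k are all illustrative.

    # Hedged sketch: k-NN density as a noise score for paired embeddings.
    # Assumes video/text embeddings from some pretrained encoders.
    import numpy as np

    def knn_density_scores(video_emb, text_emb, k=8):
        """Score each (video, text) pair by the density of its joint
        embedding; low-density pairs are treated as likely noisy.

        video_emb, text_emb: (n, d) arrays of L2-normalized embeddings.
        Returns an (n,) array; higher = denser = more likely clean.
        """
        joint = np.concatenate([video_emb, text_emb], axis=1)   # (n, 2d)
        joint /= np.linalg.norm(joint, axis=1, keepdims=True)
        sim = joint @ joint.T                    # pairwise cosine similarities
        np.fill_diagonal(sim, -np.inf)           # exclude self-matches
        # Density proxy: mean similarity to the k nearest neighbors.
        topk = np.sort(sim, axis=1)[:, -k:]
        return topk.mean(axis=1)

    # Usage: turn density scores into per-sample loss weights.
    rng = np.random.default_rng(0)
    v = rng.normal(size=(100, 64)); v /= np.linalg.norm(v, axis=1, keepdims=True)
    t = rng.normal(size=(100, 64)); t /= np.linalg.norm(t, axis=1, keepdims=True)
    scores = knn_density_scores(v, t)
    weights = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)

In a training loop, such weights could down-weight (or soft-label) pairs whose joint embedding lies in a low-density region, which is the intuition behind using density estimation as a noise estimate; the paper's actual building block and error bound should be taken from the linked code and the citation below.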
Citation
Amrani, E., Ben-Ari, R., Rotman, D., & Bronstein, A. (2021). Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 8A, pp. 6644–6652). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i8.16822