Making each modality in multi-modal data contribute is of vital importance to learning a versatile multi-modal model. Existing methods, however, are often dominated by one or a few modalities during model training, resulting in sub-optimal performance. In this article, we refer to this problem as modality bias and attempt to study it systematically and comprehensively in the context of multi-modal classification. Through several empirical analyses, we recognize that one modality affects the model prediction more simply because it is spuriously correlated with the instance labels. To facilitate evaluation of the modality bias problem, we construct two datasets, for the colored digit recognition and the video action recognition tasks respectively, in line with the Out-of-Distribution (OoD) protocol. Together with existing benchmarks for the visual question answering task, we empirically demonstrate the performance degradation of existing methods on these OoD datasets, which serves as evidence of modality bias learning. In addition, to overcome this problem, we propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned according to the training-set statistics. We then apply this method to 10 baselines in total to test its effectiveness. The results on four datasets across the above three tasks show that our method yields remarkable performance improvements over the baselines, demonstrating its effectiveness in reducing the modality bias problem.
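To make the core idea of the loss concrete, the following is a minimal sketch, not the authors' exact formulation: one plausible reading of "adaptively learning the feature space for each label from training-set statistics" is a per-class adaptive-margin softmax loss, where each class's margin is derived from its label frequency so that over-represented (potentially spuriously correlated) labels receive a tighter decision region. The class name, hyperparameters, and frequency-to-margin mapping below are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveMarginLoss(nn.Module):
    """Hedged sketch of a frequency-aware adaptive-margin loss.

    Margins are scaled by training-set label frequencies: the more
    frequent a label, the larger the margin subtracted from its logit.
    This is an assumed illustration, not the paper's actual method.
    """

    def __init__(self, class_counts, base_margin=0.5):
        super().__init__()
        counts = torch.as_tensor(class_counts, dtype=torch.float)
        freqs = counts / counts.sum()
        # Map frequencies to [0, base_margin]; the most frequent class
        # gets the full base margin (assumed mapping).
        self.register_buffer("margins", base_margin * freqs / freqs.max())

    def forward(self, logits, labels):
        # Subtract the per-class margin from the ground-truth logit,
        # which tightens the feature region of frequent labels.
        margins = self.margins[labels]                     # (batch,)
        adjusted = logits.clone()
        adjusted[torch.arange(logits.size(0)), labels] -= margins
        return F.cross_entropy(adjusted, labels)


# Usage: class_counts gathered from the training set (hypothetical values).
# criterion = AdaptiveMarginLoss(class_counts=[5000, 120, 800])
# loss = criterion(model(inputs), targets)
```

Because the loss only rewrites the logits before a standard cross-entropy, it can be dropped into an existing classifier as a plug-and-play replacement for the original criterion, which matches the abstract's description of applying the method across multiple baselines.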
Citation:
Guo, Y., Nie, L., Cheng, H., Cheng, Z., Kankanhalli, M., & Del Bimbo, A. (2023). On Modality Bias Recognition and Reduction. ACM Transactions on Multimedia Computing, Communications, and Applications, 19(3), 1–22. https://doi.org/10.1145/3565266