Out-of-distribution (OOD) detection is essential for reliable and trustworthy machine learning. Recent multi-modal OOD detection leverages textual information from in-distribution (ID) class names for visual OOD detection, yet it currently neglects the rich contextual information of ID classes. Large language models (LLMs) encode a wealth of world knowledge and can be prompted to generate descriptive features for each class. However, as our analysis shows, indiscriminately using such knowledge severely harms OOD detection due to LLM hallucinations. In this paper, we propose to apply world knowledge to enhance OOD detection performance through selective generation from LLMs. Specifically, we introduce a consistency-based uncertainty calibration method to estimate the confidence score of each generation. We further extract visual objects from each image to fully capitalize on the aforementioned world knowledge. Extensive experiments demonstrate that our method consistently outperforms the state-of-the-art.
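The consistency-based calibration idea can be sketched as follows, under the assumption that a generation's confidence is taken as the agreement rate across several independent LLM samples for the same prompt. The function name and the sampled strings below are illustrative, not the paper's implementation:

```python
from collections import Counter

def consistency_confidence(generations):
    """Illustrative sketch: score an LLM-generated class description
    by how often the same answer recurs across independent samples.
    A description that the model reproduces consistently is treated
    as more trustworthy than one it produces only once."""
    if not generations:
        return 0.0
    counts = Counter(generations)
    _, top_freq = counts.most_common(1)[0]
    return top_freq / len(generations)

# Five hypothetical samples for the same "describe this class" prompt:
samples = [
    "has long whiskers",
    "has long whiskers",
    "is a type of fish",   # inconsistent outlier, likely a hallucination
    "has long whiskers",
    "has long whiskers",
]
print(consistency_confidence(samples))  # 0.8
```

A low score would flag the generation for filtering, matching the abstract's notion of selective generation: descriptions the LLM cannot reproduce consistently are discarded rather than fed into the OOD detector.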
Dai, Y., Lang, H., Zeng, K., Huang, F., & Li, Y. (2023). Exploring Large Language Models for Multi-Modal Out-of-Distribution Detection. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 5292–5305). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.351