This work tackles the important task of understanding the out-of-distribution (OOD) behavior of two prominent families of generative models: GANs and diffusion models. Understanding this behavior is crucial for assessing their broader utility and risks as these systems are increasingly deployed in our daily lives. Our first contribution is to demonstrate that diffusion spaces outperform GAN latent spaces at producing high-quality inversions of OOD images; we also provide a theoretical analysis attributing this to the absence of prior holes in diffusion spaces. Our second contribution is the theoretical hypothesis that diffusion spaces can be projected onto a bounded hypersphere, enabling image manipulation through geodesic traversal between inverted images. Our analysis shows that different geodesics share common attributes for the same manipulation, which we leverage to perform a variety of image edits, and we conduct thorough empirical evaluations to support these claims. Our third contribution is a novel approach to few-shot sampling of out-of-distribution data: we invert a small set of images and sample from the cluster formed by their inverted latents. The proposed technique achieves state-of-the-art image quality on the few-shot generation task. Our findings underscore the promise of diffusion spaces for out-of-distribution imaging and open avenues for further exploration.
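Since the abstract only sketches these mechanisms, the following is a minimal illustrative sketch (not the authors' released code) of what geodesic traversal and cluster-based few-shot sampling could look like in practice, assuming latents obtained by inverting images with a pretrained diffusion model (e.g., via DDIM inversion) and projected onto a hypersphere of common radius. The names `latent_a`, `latent_b`, and `sample_from_inverted_cluster`, as well as the Gaussian cluster model, are hypothetical placeholders, not details taken from the paper.

```python
import torch


def slerp(z_a: torch.Tensor, z_b: torch.Tensor, t: float) -> torch.Tensor:
    """Spherical linear interpolation between two latents of equal norm.

    Follows the great-circle (geodesic) arc on the hypersphere containing
    both latents, i.e., the traversal path described in the abstract.
    """
    a, b = z_a.flatten(), z_b.flatten()
    cos_theta = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm()), -1.0, 1.0)
    theta = torch.arccos(cos_theta)
    if theta.abs() < 1e-6:  # nearly identical latents: fall back to linear interpolation
        return (1 - t) * z_a + t * z_b
    sin_theta = torch.sin(theta)
    return (torch.sin((1 - t) * theta) / sin_theta) * z_a \
         + (torch.sin(t * theta) / sin_theta) * z_b


def sample_from_inverted_cluster(inverted_latents: torch.Tensor, n_samples: int) -> torch.Tensor:
    """Few-shot sampling sketch: fit an axis-aligned Gaussian to a handful of
    inverted latents and draw new latents from it. The Gaussian fit is an
    assumption made here for illustration; the abstract only states that
    samples are drawn from the cluster formed by the inverted latents.
    """
    mean = inverted_latents.mean(dim=0)
    std = inverted_latents.std(dim=0)
    return mean + std * torch.randn(n_samples, *mean.shape)


# Hypothetical usage: in practice, latent_a / latent_b / few_shot_latents would
# come from inverting real (possibly out-of-distribution) images with a
# pretrained diffusion model (e.g., DDIM inversion); random tensors stand in here.
latent_a = torch.randn(1, 4, 64, 64)
latent_b = torch.randn(1, 4, 64, 64)
latent_b = latent_b * (latent_a.norm() / latent_b.norm())  # put both on the same hypersphere

geodesic_path = [slerp(latent_a, latent_b, t) for t in torch.linspace(0, 1, 8)]

few_shot_latents = torch.randn(5, 4, 64, 64)               # stand-in for 5 inverted OOD images
new_latents = sample_from_inverted_cluster(few_shot_latents, n_samples=16)
# Each latent in geodesic_path / new_latents would then be decoded back to an
# image by running the pretrained diffusion model's deterministic sampler.
```

The slerp path stays on the hypersphere at every step, which is what distinguishes geodesic traversal from the straight-line interpolation commonly used in GAN latent spaces.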
Ramachandran, S. N., Mukhopadhyay, R., Agarwal, M., Jawahar, C. V., & Namboodiri, V. (2024). Understanding the Generalization of Pretrained Diffusion Models on Out-of-Distribution Data. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 14767–14775). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i13.29395