This paper proposes a novel oversampling approach that aims to balance the class priors of a highly imbalanced, high-dimensional data distribution. The crux of our approach lies in learning interpretable latent representations that model the synthesis mechanism of the minority samples via a generative adversarial network (GAN). A Bayesian regularizer is imposed to guide the GAN to extract a set of salient features that are either disentangled or intentionally entangled, with their interplay controlled by a prescribed structure defined with a human in the loop. As such, our GAN enjoys improved sample complexity and can synthesize high-quality minority samples even when the minority classes are extremely small during training. Empirical studies substantiate that our approach empowers simple classifiers to achieve superior imbalanced classification performance over state-of-the-art competitors and remains robust across various imbalance settings. Code is released at github.com/fudonglin/IMSIC.
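The abstract does not give implementation details of the proposed method, so the snippet below is only a minimal illustrative sketch of the general idea it builds on: training a GAN on minority-class samples alone and using the generator's outputs to rebalance the training set for a downstream classifier. It omits the paper's interpretable latent structure and Bayesian regularizer entirely; every architecture choice, function name, and hyperparameter here is an assumption, not the authors' IMSIC implementation.

```python
# Illustrative GAN-based minority oversampling sketch (NOT the paper's IMSIC method).
# All dimensions, architectures, and hyperparameters below are assumptions.
import torch
import torch.nn as nn

LATENT_DIM, FEAT_DIM = 16, 64  # assumed latent and feature dimensionalities


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, FEAT_DIM),
        )

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)


def train_minority_gan(x_minority, epochs=200, batch_size=32):
    """Train a vanilla GAN on minority-class samples only (illustrative)."""
    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        idx = torch.randint(0, x_minority.size(0), (batch_size,))
        real = x_minority[idx]
        fake = gen(torch.randn(batch_size, LATENT_DIM))

        # Discriminator step: push real samples toward 1, synthetic ones toward 0.
        loss_d = bce(disc(real), torch.ones(batch_size, 1)) + \
                 bce(disc(fake.detach()), torch.zeros(batch_size, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: try to make the discriminator label fakes as real.
        loss_g = bce(disc(fake), torch.ones(batch_size, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return gen


def oversample(gen, n_synthetic):
    """Draw synthetic minority samples to append to the training set."""
    with torch.no_grad():
        return gen(torch.randn(n_synthetic, LATENT_DIM))


if __name__ == "__main__":
    x_min = torch.randn(50, FEAT_DIM)             # stand-in for a tiny minority class
    gen = train_minority_gan(x_min)
    synthetic = oversample(gen, n_synthetic=450)  # e.g., balance a 500-sample majority
    print(synthetic.shape)                        # torch.Size([450, 64])
```

In this rough sketch, the synthetic samples would simply be concatenated with the original minority data before fitting a classifier; the paper's contribution lies in constraining the generator's latent space so that such synthesis remains reliable even with very few minority training samples.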
He, Y., Lin, F., Yuan, X., & Tzeng, N. F. (2021). Interpretable Minority Synthesis for Imbalanced Classification. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2542–2548). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/350