Abstract
In many real-world machine learning applications, samples belong to a set of domains, e.g., for product reviews, each review belongs to a product category. In this paper, we study multi-domain imbalanced learning (MIL), the scenario in which there is imbalance not only in classes but also in domains. In the MIL setting, different domains exhibit different patterns, and there is a varying degree of similarity and divergence among domains, posing both opportunities and challenges for transfer learning, especially when training data are limited or insufficient. We propose a novel domain-aware contrastive knowledge transfer method, DCMI, that (1) identifies shared domain knowledge to encourage positive transfer among similar domains (in particular, from head domains to tail domains), and (2) isolates domain-specific knowledge to minimize negative transfer from dissimilar domains. We evaluated DCMI on three different datasets, showing significant improvements across different MIL scenarios.
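To make the idea concrete, the following is a minimal sketch of a domain-aware supervised contrastive loss in which positive pairs (same class) are weighted by a precomputed domain-similarity matrix, so that transfer is encouraged between similar domains and damped between dissimilar ones. This is an illustrative assumption about how such a loss could look, not the exact objective used in the paper; the function name, the `domain_sim` matrix, and the temperature `tau` are all hypothetical.

```python
import numpy as np

def domain_aware_contrastive_loss(emb, labels, domains, domain_sim, tau=0.1):
    """Illustrative domain-aware supervised contrastive loss (hypothetical
    sketch, not the paper's exact formulation).

    emb        : (n, d) array of sample embeddings
    labels     : (n,) class labels; same-class pairs are positives
    domains    : (n,) domain index of each sample
    domain_sim : (num_domains, num_domains) similarity weights in [0, 1]
    tau        : softmax temperature
    """
    # L2-normalize embeddings so dot products are cosine similarities
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T / tau
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        # log of the denominator over all other samples (InfoNCE-style)
        logits = np.delete(sim[i], i)
        log_denom = np.log(np.exp(logits).sum())
        for j in range(n):
            if j == i or labels[j] != labels[i]:
                continue
            # weight the positive pair by how similar the two domains are,
            # encouraging transfer between similar domains only
            w = domain_sim[domains[i], domains[j]]
            loss += -w * (sim[i, j] - log_denom)
            count += 1
    return loss / max(count, 1)
```

Under this sketch, aligning same-class embeddings lowers the loss, and pairs drawn from dissimilar domains contribute less, approximating the paper's goal of positive transfer among similar domains and isolation from dissimilar ones.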
Citation
Ke, Z., Kachuee, M., & Lee, S. (2022). Domain-Aware Contrastive Knowledge Transfer for Multi-domain Imbalanced Data. In WASSA 2022 - 12th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, Proceedings of the Workshop (pp. 25–36). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.wassa-1.3