Adaptive semi-supervised learning for cross-domain sentiment classification

48 citations · 143 Mendeley readers

Abstract

We consider the cross-domain sentiment classification problem, where a sentiment classifier is learned from a source domain and generalized to a target domain. Our approach explicitly minimizes the distance between the source and target instances in an embedded feature space. With the difference between source and target minimized, we then exploit additional information from the target domain through semi-supervised learning: we jointly employ two regularizations, entropy minimization and self-ensemble bootstrapping, to incorporate the unlabeled target data for classifier refinement. Our experimental results demonstrate that the proposed approach can better leverage unlabeled data from the target domain and achieves substantial improvements over baseline methods in various experimental settings.
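The two target-domain regularizers the abstract names can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation: the function names, the EMA decay value, and the toy inputs are all assumptions made here for clarity.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_minimization_loss(logits):
    # Entropy minimization: penalize uncertain predictions on the
    # unlabeled target data via the mean Shannon entropy of the
    # predicted class distributions.
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def self_ensemble_targets(ensemble, current_probs, decay=0.9):
    # Self-ensemble bootstrapping: an exponential moving average of
    # past predictions serves as a soft pseudo-label target.
    return decay * ensemble + (1.0 - decay) * current_probs

def bootstrap_loss(ensemble, logits):
    # Cross-entropy between the ensembled targets and the current
    # predictions on the unlabeled target data.
    p = softmax(logits)
    return float(-(ensemble * np.log(p + 1e-12)).sum(axis=-1).mean())

# Toy usage on unlabeled target-domain logits (2 examples, 3 classes).
logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.0]])
probs = softmax(logits)
ensemble = self_ensemble_targets(np.full_like(probs, 1 / 3), probs)
print(entropy_minimization_loss(logits), bootstrap_loss(ensemble, logits))
```

In training, both terms would be added (with weighting coefficients) to the supervised source-domain loss and minimized jointly; the decay of 0.9 is a placeholder, not a value taken from the paper.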


Citation (APA)

He, R., Lee, W. S., Ng, H. T., & Dahlmeier, D. (2018). Adaptive semi-supervised learning for cross-domain sentiment classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 3467–3476). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1383

Readers' Seniority

PhD / Post grad / Masters / Doc: 55 (80%)
Researcher: 7 (10%)
Lecturer / Post doc: 4 (6%)
Professor / Associate Prof.: 3 (4%)

Readers' Discipline

Computer Science: 72 (85%)
Engineering: 5 (6%)
Linguistics: 5 (6%)
Business, Management and Accounting: 3 (4%)
