Abstract
Training data for sentiment analysis are abundant in some domains, yet scarce in others. It is therefore useful to leverage data from all existing domains to enhance performance on each individual domain. We investigate this problem by learning domain-specific representations of input sentences using neural networks. In particular, a descriptor vector is learned to represent each domain, and is used to map adversarially trained, domain-general Bi-LSTM input representations into domain-specific representations. Based on this model, we further expand the input representation with exemplary domain knowledge, collected by attending over a memory network of domain training data. Results show that our model significantly outperforms existing methods on multi-domain sentiment analysis, giving the best accuracies on two different benchmarks.
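As a rough sketch only (not the paper's exact formulation), the two core ideas in the abstract, mapping a domain-general sentence vector into a domain-specific one via a learned domain descriptor, and attending over a memory of encoded domain examples, might look like the following. All sizes, parameter names, and the element-wise modulation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

HID = 8          # assumed Bi-LSTM sentence-vector size
N_DOMAINS = 3    # number of training domains
MEM = 5          # exemplar vectors per domain in the memory

# Hypothetical learned parameters: one descriptor vector per domain.
descriptors = rng.normal(size=(N_DOMAINS, HID))

def domain_specific(h_general, domain_id):
    """Map a domain-general sentence vector into a domain-specific one by
    element-wise modulation with the domain's descriptor (a simplification
    of the paper's learned mapping)."""
    return h_general * np.tanh(descriptors[domain_id])

def attend_memory(query, memory):
    """Soft attention over a memory of exemplar vectors: softmax over dot
    products with the query, then a weighted sum of the memory rows."""
    scores = memory @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory

h = rng.normal(size=HID)                  # domain-general Bi-LSTM output
memory = rng.normal(size=(MEM, HID))      # encoded domain training examples
h_dom = domain_specific(h, domain_id=1)   # domain-specific representation
context = attend_memory(h_dom, memory)    # exemplary domain knowledge
features = np.concatenate([h_dom, context])  # expanded input representation
print(features.shape)
```

The final feature vector concatenates the domain-specific sentence representation with the attention-weighted memory summary, mirroring how the abstract describes expanding the input representation before classification.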
Liu, Q., Zhang, Y., & Liu, J. (2018). Learning domain representation for multi-domain sentiment classification. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 1, pp. 541–550). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-1050