Learning Disentangled Representations of Negation and Uncertainty

Abstract

Negation and uncertainty modeling are longstanding tasks in natural language processing. Linguistic theory postulates that expressions of negation and uncertainty are semantically independent from each other and from the content they modify. However, previous work on representation learning does not explicitly model this independence. We therefore attempt to disentangle the representations of negation, uncertainty, and content using a Variational Autoencoder. We find that simply supervising the latent representations results in good disentanglement, but auxiliary objectives based on adversarial learning and mutual information minimization can provide additional disentanglement gains.
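To make the architecture described in the abstract concrete, the following is a minimal sketch (not the authors' released code) of a VAE whose latent vector is partitioned into content, negation, and uncertainty components, with small classifiers supervising the two attribute partitions. All module names, dimensions, and the binary label scheme are illustrative assumptions; the paper's adversarial and mutual-information objectives are only indicated in comments.

```python
# Hypothetical sketch of a supervised, partitioned-latent VAE.
# Sizes and names are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class DisentangledVAE(nn.Module):
    def __init__(self, input_dim=512, content_dim=60, attr_dim=2):
        super().__init__()
        self.latent_dims = {"content": content_dim,
                            "negation": attr_dim,
                            "uncertainty": attr_dim}
        total = sum(self.latent_dims.values())
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        # One (mu, logvar) head per latent partition.
        self.heads = nn.ModuleDict({
            name: nn.Linear(256, 2 * dim)
            for name, dim in self.latent_dims.items()})
        self.decoder = nn.Sequential(nn.Linear(total, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))
        # Linear probes supervising the negation/uncertainty latents
        # (binary labels: negated vs. not, uncertain vs. certain).
        self.classifiers = nn.ModuleDict({
            name: nn.Linear(attr_dim, 2)
            for name in ("negation", "uncertainty")})

    def forward(self, x):
        h = self.encoder(x)
        zs, kl = {}, 0.0
        for name in self.latent_dims:
            mu, logvar = self.heads[name](h).chunk(2, dim=-1)
            # Reparameterization trick: z = mu + sigma * eps.
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            kl = kl + (-0.5 * (1 + logvar - mu.pow(2)
                               - logvar.exp())).sum(-1).mean()
            zs[name] = z
        recon = self.decoder(torch.cat(list(zs.values()), dim=-1))
        logits = {name: clf(zs[name])
                  for name, clf in self.classifiers.items()}
        return recon, kl, logits
```

A training step under these assumptions would combine a reconstruction loss on `recon`, the KL term, and cross-entropy on the two sets of `logits`; the auxiliary objectives from the paper (adversarial classifiers that try to predict each attribute from the *other* partitions, and a mutual-information penalty between partitions) would be added to this loss to push the representations further apart.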

Citation (APA)

Vasilakes, J., Zerva, C., Miwa, M., & Ananiadou, S. (2022). Learning disentangled representations of negation and uncertainty. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 8380–8397). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.acl-long.574
