Domain-Specific Word Embeddings with Structure Prediction

Abstract

Complementary to finding good general word embeddings, an important question for representation learning is to find dynamic word embeddings, for example, across time or domain. Current methods do not offer a way to use or predict information on structure between sub-corpora, such as across time or domain, and dynamic embeddings can only be compared after post-alignment. We propose novel word embedding methods that simultaneously provide general word representations for the whole corpus, domain-specific representations for each sub-corpus, sub-corpus structure, and embedding alignment. We present an empirical evaluation on New York Times articles and two English Wikipedia datasets with articles on science and philosophy. Our method, called Word2Vec with Structure Prediction (W2VPred), outperforms baselines on general analogy tests, domain-specific analogy tests, and multiple specific word embedding evaluations, as well as in structure prediction performance when no structure is given a priori. As a use case in the field of Digital Humanities, we demonstrate how to raise novel research questions for high literature from the German Text Archive.
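The abstract does not spell out the training objective, but the core idea it describes, one shared embedding for the whole corpus plus one embedding per sub-corpus, trained jointly so the domain vectors stay comparable without post-alignment, can be illustrated with a short sketch. The following PyTorch code is an assumption-laden illustration, not the authors' actual W2VPred formulation: the class name DomainSkipGram, the align_weight parameter, and the quadratic alignment penalty are all hypothetical choices made here for clarity.

```python
# Minimal sketch: skip-gram with negative sampling, where each
# sub-corpus ("domain") has its own center-word embeddings that are
# regularized toward a shared general embedding. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainSkipGram(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, vocab_size, dim, n_domains, align_weight=0.1):
        super().__init__()
        self.general = nn.Embedding(vocab_size, dim)   # whole-corpus vectors
        self.domain = nn.ModuleList(
            [nn.Embedding(vocab_size, dim) for _ in range(n_domains)]
        )                                              # per-sub-corpus vectors
        self.context = nn.Embedding(vocab_size, dim)   # output (context) vectors
        self.align_weight = align_weight

    def forward(self, center, context, negatives, domain_id):
        v = self.domain[domain_id](center)             # domain-specific center vector
        u_pos = self.context(context)                  # (batch, dim)
        u_neg = self.context(negatives)                # (batch, k, dim)
        # standard skip-gram with negative sampling loss
        pos = F.logsigmoid((v * u_pos).sum(-1))
        neg = F.logsigmoid(-(u_neg * v.unsqueeze(1)).sum(-1)).sum(-1)
        sgns = -(pos + neg).mean()
        # alignment term: keep domain vectors close to the general ones,
        # so embeddings remain comparable across domains without post-alignment
        align = (v - self.general(center)).pow(2).sum(-1).mean()
        return sgns + self.align_weight * align
```

With random index tensors, a single training step would look like this; in practice the indices would come from a sliding-window corpus sampler:

```python
model = DomainSkipGram(vocab_size=10_000, dim=100, n_domains=3)
center = torch.randint(0, 10_000, (32,))
context = torch.randint(0, 10_000, (32,))
negatives = torch.randint(0, 10_000, (32, 5))
loss = model(center, context, negatives, domain_id=0)
loss.backward()
```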

Citation (APA)

Lassner, D., Brandl, S., Baillot, A., & Nakajima, S. (2023). Domain-Specific Word Embeddings with Structure Prediction. Transactions of the Association for Computational Linguistics, 11, 320–335. https://doi.org/10.1162/tacl_a_00538
