Word embeddings and its application in deep learning

Abstract

Word embedding, in simple terms, can be defined as representing text in the form of vectors. Vector representations of text help in finding similarities, because contextual words that regularly appear near one another also tend to appear close together in the vector space. The motivating factor behind such a numerical representation of a text corpus is that it can be manipulated arithmetically just like any other vector. Deep learning and neural networks are not new at all; both concepts have been around for decades, but a major bottleneck was the unavailability and inaccessibility of computational power. Deep learning is now being used effectively in Natural Language Processing, thanks to improvements in techniques such as word embedding, mobile enablement, and attention mechanisms. The paper discusses how two popular word embedding models (the Word2Vec models) can be used for deep learning and also compares them. The implementation steps of the Skip-gram model are discussed in the paper as well. The paper also discusses challenging issues for the Word2Vec model.
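As a minimal sketch of the idea described above (not taken from the paper itself), the snippet below assumes the gensim library and a toy tokenized corpus, trains a Skip-gram Word2Vec model, and queries the resulting word vectors for similarity and simple vector arithmetic:

# Minimal sketch, assuming gensim >= 4.0 and a toy corpus (both are assumptions, not from the paper).
from gensim.models import Word2Vec

# Each sentence is a list of tokens; a real corpus would be far larger.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["a", "man", "walks", "in", "the", "city"],
    ["a", "woman", "walks", "in", "the", "city"],
]

# sg=1 selects the Skip-gram architecture (sg=0 would select CBOW).
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=100)

# Each word is now a dense vector that can be manipulated arithmetically.
vec_king = model.wv["king"]                      # a 50-dimensional numpy array
print(model.wv.similarity("king", "queen"))      # cosine similarity between two words
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))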

CITATION STYLE: APA

Verma, P., & Khandelwal, B. (2019). Word embeddings and its application in deep learning. International Journal of Innovative Technology and Exploring Engineering, 8(11), 337–341. https://doi.org/10.35940/ijitee.K1343.0981119
