Abstract
Word embedding, in simple terms, is the representation of text as vectors. Vector representations of text make it possible to find similarities, because words that regularly appear in similar contexts tend to lie close together in the vector space. The motivating factor behind such a numerical representation of a text corpus is that it can be manipulated arithmetically just like any other vector. Neither deep learning nor neural networks are new; both concepts have been around for decades, but a major bottleneck was the unavailability and inaccessibility of computational power. Deep learning is now used effectively in Natural Language Processing, aided by improvements in techniques such as word embedding, mobile enablement, and attention mechanisms. This paper discusses the two popular word-embedding models of Word2Vec (CBOW and Skip-gram) and compares them for use in deep learning. The implementation steps of the Skip-gram model are also discussed, along with challenging issues for the Word2Vec model.
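As a minimal sketch of the ideas the abstract describes (not code from the paper itself), the following assumes the gensim library (version 4.x); the toy corpus, hyperparameters, and word choices are illustrative only:

    # Minimal Skip-gram sketch, assuming gensim >= 4.0 is installed.
    from gensim.models import Word2Vec

    # Toy corpus: each sentence is a list of tokens.
    sentences = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["a", "man", "and", "a", "woman", "walk"],
    ]

    # sg=1 selects the Skip-gram architecture; sg=0 would select CBOW.
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

    # Embeddings behave like ordinary vectors, so analogies can be probed
    # arithmetically, e.g. king - man + woman should land near queen
    # (meaningful only on a real corpus, not this toy one).
    print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

This is what "manipulated arithmetically just like any other vector" means in practice: similarity queries and analogies reduce to vector addition, subtraction, and cosine similarity over the learned embeddings.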
Citation
Verma, P., & Khandelwal, B. (2019). Word embeddings and its application in deep learning. International Journal of Innovative Technology and Exploring Engineering, 8(11), 337–341. https://doi.org/10.35940/ijitee.K1343.0981119