BERT (Bidirectional Encoder Representations from Transformers), published by Google AI Language, is a recently developed language representation model that has caused a flurry in the machine learning community. Unlike earlier language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by conditioning on both left and right context in all layers. A pre-trained BERT model can then be fine-tuned with just one additional output layer to build state-of-the-art models for a range of language understanding tasks, such as opinion mining and question answering, without modifying task-specific architectures. The key technical innovation of BERT is the application of bidirectional training of the Transformer, a popular attention model, to language modelling. The results presented in the paper show that a bidirectionally trained language model develops a deeper sense of context for language understanding than single-direction models. The paper also gives a detailed description of BERT's novel training technique, in which the Masked Language Model (MLM) and Next Sentence Prediction (NSP) objectives make bidirectional training possible where it previously was not. [Jacob Devlin, 2019]
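To make the fine-tuning step concrete, the sketch below shows how a pre-trained BERT encoder can be extended with a single output layer for an opinion-mining (sentiment classification) task. It is a minimal illustration, not code from the paper: it assumes the Hugging Face transformers library, the public bert-base-uncased checkpoint, and a hypothetical two-class positive/negative label scheme.

```python
# Minimal sketch (assumption: Hugging Face "transformers" and the public
# "bert-base-uncased" checkpoint are available). BertForSequenceClassification
# places one classification output layer on top of the pre-trained encoder,
# mirroring the "single added output layer" fine-tuning idea described above.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # hypothetical label scheme: 0 = negative, 1 = positive
)

# Tokenize a toy opinionated sentence; the Transformer encoder attends to both
# the left and right context of every token, i.e. it is bidirectional.
inputs = tokenizer(
    "The battery life of this phone is fantastic.",
    return_tensors="pt",
)

# One forward/backward pass; in practice this loop would run over a labelled
# opinion-mining dataset for a few epochs.
labels = torch.tensor([1])  # 1 = positive opinion (hypothetical)
outputs = model(**inputs, labels=labels)
loss, logits = outputs.loss, outputs.logits
loss.backward()  # gradients update both the new output layer and the encoder

print(logits.softmax(dim=-1))  # predicted class probabilities
```

Because the heavy lifting is done by the pre-trained bidirectional encoder, only this small added head and a brief fine-tuning run are needed per downstream task.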