Transformers in Machine Learning: Literature Review

  • T, T.
  • Haryono, W.
  • Zailani, A. U.
  • Djaksana, Y. M.
  • Rosmawarni, N.
  • Arianti, N. D.

Abstract

In this study, the researchers present a review of methods in transformer machine learning. Transformers are deep learning architectures that take sequences as input and can be modified for a wide range of tasks; at their core is a mechanism that learns contextual relationships between words. They have been applied to many objects of study: text compression in reading material, recognition of chemical images with an accuracy of 96%, and detection of a person's emotions in social media conversations, for example on Facebook with happy, sad, and angry categories. Figure 1 illustrates the encoder and decoder process, from input through to output. The purpose of this study is to review literature from various journals that discuss transformers, presenting for each work the subject or dataset, the data analysis method, the year, and the accuracy achieved. Using this approach, the researchers draw conclusions about the highest accuracies reported and identify opportunities for further research.
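The "mechanism that learns contextual relationships between words" mentioned in the abstract is the attention mechanism at the heart of the transformer encoder and decoder. As background (not part of any specific paper reviewed), a minimal NumPy sketch of scaled dot-product self-attention, where each token's output is a weighted mix of all tokens' values:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise token similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

# Toy example: 3 "tokens" with 4-dimensional embeddings,
# using the same matrix for queries, keys, and values (self-attention)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)
print(out.shape)  # one contextualized vector per token
```

In a full transformer, Q, K, and V are separate learned linear projections of the input, and many such attention heads run in parallel inside each encoder and decoder layer; the sketch above only shows the core weighting step.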

Citation (APA)

T, T., Haryono, W., Zailani, A. U., Djaksana, Y. M., Rosmawarni, N., & Arianti, N. D. (2023). Transformers in Machine Learning: Literature Review. Jurnal Penelitian Pendidikan IPA, 9(9), 604–610. https://doi.org/10.29303/jppipa.v9i9.5040
