Two-Way Neural Network Chinese-English Machine Translation Model Fused with Attention Mechanism

Abstract

This study builds a machine translation model with an end-to-end encoder-decoder architecture, allowing the machine to learn features automatically and transform the corpus into distributed representations; word vectors are obtained by direct neural-network mapping. Neural machine translation models are constructed for several network structures and compared. In the LSTM-based translation model, the gating mechanism mitigates gradient decay and improves the handling of long sequences. The GRU-based model simplifies this structure, reducing training complexity while achieving good performance. To address the problem that the encoder compresses source-language sequences of arbitrary length into a fixed-dimensional background vector, an attention mechanism is introduced to dynamically adjust how strongly each part of the source context influences the target-language sequence, improving the model's handling of long-distance dependencies. To better capture context in both directions, the study further proposes a translation model based on a bidirectional (two-way) GRU, and comparative experiments across multiple translation models verify the effectiveness of the performance improvement.
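To make the gating that the abstract contrasts between LSTM and GRU concrete, the following is a minimal hand-rolled GRU cell in PyTorch. It is a sketch of the standard textbook GRU equations, not the authors' code; all class names, dimensions, and the training setup are illustrative assumptions. Where an LSTM maintains three gates plus a separate cell state, the GRU keeps only an update gate and a reset gate, which is the simplification that reduces training complexity.

# A self-contained sketch of a GRU cell, written out by hand so the
# gating is explicit. Standard GRU formulation; names and sizes are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class GRUCellSketch(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        # Two gates instead of the LSTM's three (plus its separate cell
        # state) -- the simplification that lowers training cost.
        self.Wz = nn.Linear(in_dim + hid_dim, hid_dim)  # update gate
        self.Wr = nn.Linear(in_dim + hid_dim, hid_dim)  # reset gate
        self.Wh = nn.Linear(in_dim + hid_dim, hid_dim)  # candidate state

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        xh = torch.cat((x, h), dim=1)
        z = torch.sigmoid(self.Wz(xh))   # how much of the state to update
        r = torch.sigmoid(self.Wr(xh))   # how much history to suppress
        h_tilde = torch.tanh(self.Wh(torch.cat((x, r * h), dim=1)))
        # Interpolating update: gradients can flow through the (1 - z) * h
        # path largely unattenuated, easing long-distance dependencies.
        return (1 - z) * h + z * h_tilde

cell = GRUCellSketch(in_dim=256, hid_dim=512)
h = torch.zeros(4, 512)                  # batch of 4 sequences
for x in torch.randn(10, 4, 256):        # 10 time steps
    h = cell(x, h)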
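The bidirectional encoder and the attention mechanism the abstract describes can likewise be sketched in a few lines. The sketch below pairs a bidirectional GRU encoder with additive (Bahdanau-style) attention; the hyperparameters, class names, and wiring are assumptions for illustration, not the paper's exact model.

# A minimal sketch of a bidirectional GRU encoder plus additive
# attention -- the architecture family the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiGRUEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 256, hid_dim: int = 512):
        super().__init__()
        # Embedding layer: the direct mapping from tokens to distributed
        # word representations mentioned in the abstract.
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)

    def forward(self, src: torch.Tensor):
        # src: (batch, src_len) token ids
        outputs, hidden = self.gru(self.embedding(src))
        # outputs: (batch, src_len, 2 * hid_dim) -- each position carries
        # both left-to-right and right-to-left context.
        return outputs, hidden

class AdditiveAttention(nn.Module):
    def __init__(self, enc_dim: int, dec_dim: int, attn_dim: int = 256):
        super().__init__()
        self.W = nn.Linear(enc_dim + dec_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, dec_state: torch.Tensor, enc_outputs: torch.Tensor):
        # dec_state: (batch, dec_dim); enc_outputs: (batch, src_len, enc_dim)
        src_len = enc_outputs.size(1)
        dec = dec_state.unsqueeze(1).expand(-1, src_len, -1)
        energy = self.v(torch.tanh(self.W(torch.cat((dec, enc_outputs), dim=2))))
        weights = F.softmax(energy.squeeze(2), dim=1)    # (batch, src_len)
        # Per-step context vector: a weighted sum of encoder states, so the
        # source influence is re-weighted at every decoding step instead of
        # being frozen into a single fixed-dimensional background vector.
        context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
        return context, weights

encoder = BiGRUEncoder(vocab_size=10_000)
attention = AdditiveAttention(enc_dim=1024, dec_dim=512)
enc_out, _ = encoder(torch.randint(0, 10_000, (2, 7)))   # 2 sentences, 7 tokens
context, weights = attention(torch.zeros(2, 512), enc_out)  # one decoder step

In a full model, the context vector would be concatenated with the previous target embedding and fed to a decoder GRU that predicts the next target token, with the whole pipeline trained end to end; that wiring is omitted here for brevity.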

Citation (APA)

Liang, J., & Du, M. (2022). Two-Way Neural Network Chinese-English Machine Translation Model Fused with Attention Mechanism. Scientific Programming, 2022. https://doi.org/10.1155/2022/1270700
