CNN-LSTM Facial Expression Recognition Method Fused with Two-Layer Attention Mechanism


Abstract

In exploring facial expression recognition methods, we find that existing algorithms make insufficient use of information from the key facial regions that express emotion. To address this problem, we propose a facial expression recognition method that incorporates an attention mechanism (CNN-ALSTM) on the basis of a convolutional neural network and long short-term memory (CNN-LSTM). Compared with a general CNN-LSTM, it mines the information in important regions more effectively. Furthermore, we propose a CNN-LSTM facial expression recognition method incorporating a two-layer attention mechanism (ACNN-ALSTM). We conducted comparative experiments on the Fer2013 and preprocessed CK+ datasets against CNN-ALSTM, ACNN-ALSTM, patch-based ACNN (pACNN), facial expression recognition with attention net (FERAtt), and other networks. The results show that the proposed ACNN-ALSTM hybrid neural network model outperforms related work in expression recognition.
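The abstract does not specify the attention mechanism's internals. As an illustration of the general idea of soft attention over CNN region features (all names, shapes, and the scoring scheme below are hypothetical assumptions, not the authors' implementation), a minimal NumPy sketch:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(region_feats, score_vec):
    # region_feats: (R, D) feature vectors for R facial regions from a CNN.
    # score_vec:    (D,) learned scoring vector (assumed; the paper's
    #               attention layer may be parameterized differently).
    scores = region_feats @ score_vec      # (R,) one relevance score per region
    alpha = softmax(scores)                # attention weights, sum to 1
    context = alpha @ region_feats         # (D,) weighted sum of region features
    return context, alpha

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))            # 5 hypothetical face regions, 8-dim each
w = rng.normal(size=8)
context, alpha = attention_pool(feats, w)  # context would feed an LSTM step
```

In a CNN-ALSTM-style pipeline, such a weighted context vector would be passed to the LSTM at each time step, letting the recurrent part focus on emotion-relevant regions rather than the whole face uniformly.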

Citation (APA)

Ming, Y., Qian, H., & Guangyuan, L. (2022). CNN-LSTM Facial Expression Recognition Method Fused with Two-Layer Attention Mechanism. Computational Intelligence and Neuroscience, 2022. https://doi.org/10.1155/2022/7450637
