Emotion Recognition using Convolutional Neural Network in Virtual Meeting Environment

  • Shabrina N
  • Omar J
  • Bhakti A
  • Patria A

Abstract

This study proposes an emotion recognition system that uses a Convolutional Neural Network (CNN) in a virtual meeting environment to detect the non-verbal feedback that emerges during communication. The CNN model is trained with version 2.3.0 of the tensorflow-gpu library on the FER-2013 dataset, with 80% of the data used as the training set and the remaining 20% as the test set. Training the model for 430 epochs yields an accuracy of 73.86% with a loss of 1.42. In the classification stage, a Haar-Cascade classifier algorithm is used with OpenCV to detect faces in the input image, and the trained model then predicts the emotion of the pre-processed face. Based on the results, the study achieves satisfactory performance and is expected to help in understanding the non-verbal input given during communication, among other applications.
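The abstract gives the training configuration but not the network itself, so the sketch below is only an illustration of the described setup: the layer sizes, optimizer, and batch size are assumptions, while the FER-2013 data, the 80/20 split, the 430 epochs, and TensorFlow 2.3.0 come from the abstract.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Minimal illustrative CNN for 48x48 grayscale FER-2013 faces and 7 emotion classes.
    # The architecture is assumed; the paper does not publish its exact layers.
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", padding="same",
                      input_shape=(48, 48, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(7, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # 80/20 train/test split and a long training run, as described in the abstract.
    # Loading x_train/y_train and x_test/y_test from the FER-2013 CSV is omitted here.
    # model.fit(x_train, y_train, validation_data=(x_test, y_test),
    #           epochs=430, batch_size=64)

The classification stage the abstract describes, Haar-Cascade face detection with OpenCV followed by prediction with the trained CNN, might look roughly as follows. The model file name emotion_cnn.h5, the 48x48 input size, and the label order are assumptions, not taken from the paper.

    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    # Assumed label order for the seven FER-2013 classes.
    EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

    # Load the trained CNN (hypothetical file name) and OpenCV's bundled
    # frontal-face Haar cascade.
    model = load_model("emotion_cnn.h5")
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def predict_emotions(image_path):
        """Detect faces with the Haar cascade, then classify each face with the CNN."""
        image = cv2.imread(image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        results = []
        for (x, y, w, h) in faces:
            # Crop the face, resize to the assumed 48x48 model input,
            # and scale pixel values to [0, 1] before prediction.
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
            face = face.reshape(1, 48, 48, 1)
            probabilities = model.predict(face, verbose=0)[0]
            results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probabilities))]))
        return results

In a virtual-meeting setting, predict_emotions could be called on periodically captured frames of the participant video feed to surface the detected non-verbal feedback.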

Citation (APA)

Shabrina, N. H., Omar, J., Bhakti, A. N., & Patria, A. (2021). Emotion Recognition using Convolutional Neural Network in Virtual Meeting Environment. Ultima Computing : Jurnal Sistem Komputer, 13(1), 30–38. https://doi.org/10.31937/sk.v13i1.2108
