Multimodal Emotion and Sentiment Modeling from Unstructured Big Data: Challenges, Architecture, Techniques

Abstract

The exponential growth of multimodal content in today's competitive business environment generates a huge volume of unstructured data. Unstructured big data has no fixed format or structure and can take any form, such as text, audio, images, and video. In this paper, we address the challenges of emotion and sentiment modeling posed by unstructured big data spanning different modalities. We first present an up-to-date review of emotion and sentiment modeling, including state-of-the-art techniques. We then propose a new architecture for multimodal emotion and sentiment modeling from big data. The proposed architecture consists of five essential modules: a data collection module, a multimodal data aggregation module, a multimodal data feature extraction module, a fusion and decision module, and an application module. Novel feature extraction techniques, the divide-and-conquer principal component analysis (Div-ConPCA) and the divide-and-conquer linear discriminant analysis (Div-ConLDA), are proposed for the multimodal data feature extraction module. Experiments on a multicore machine architecture are performed to validate the performance of the proposed techniques.
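The divide-and-conquer feature extraction idea lends itself to a compact illustration. Below is a minimal sketch, assuming the feature matrix is partitioned column-wise into blocks, each block is reduced with an ordinary SVD-based PCA on a separate worker process (matching the multicore setting), and the per-block projections are concatenated. The function names, block count, and combination step are illustrative assumptions, not the authors' exact Div-ConPCA algorithm.

```python
# Hedged sketch of a divide-and-conquer PCA (Div-ConPCA-style) pipeline.
# Assumptions (not from the paper's text): the feature matrix is split
# column-wise into blocks, each block is reduced independently so the
# blocks can run on separate cores, and the per-block projections are
# concatenated. The authors' exact divide/combine rules may differ.
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def block_pca(block: np.ndarray, k: int) -> np.ndarray:
    """Project one feature block onto its top-k principal components."""
    centered = block - block.mean(axis=0)
    # Economy SVD: rows of vt are the principal directions of the block.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T


def div_con_pca(X: np.ndarray, n_blocks: int = 4, k_per_block: int = 8) -> np.ndarray:
    """Split X column-wise, reduce each block in parallel, concatenate."""
    blocks = np.array_split(X, n_blocks, axis=1)
    with ProcessPoolExecutor() as pool:
        reduced = list(pool.map(block_pca, blocks, [k_per_block] * n_blocks))
    return np.hstack(reduced)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 128))  # e.g. pooled multimodal features
    Z = div_con_pca(X, n_blocks=4, k_per_block=8)
    print(Z.shape)  # (1000, 32)
```

The same split/reduce/combine skeleton would carry over to a Div-ConLDA variant by replacing the per-block SVD with a supervised LDA projection of each block.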

Citation (APA)

Seng, J. K. P., & Ang, K. L. M. (2019). Multimodal Emotion and Sentiment Modeling from Unstructured Big Data: Challenges, Architecture, Techniques. IEEE Access, 7, 90982–90998. https://doi.org/10.1109/ACCESS.2019.2926751
