Multimodal Personality Recognition using Cross-attention Transformer and Behaviour Encoding


Abstract

Personality computing and affective computing have recently attracted interest in many research areas. Datasets for these tasks generally contain multiple modalities, such as video, audio, language and bio-signals. In this paper, we propose a flexible model that exploits all available data. Because the task involves complex relations, and to avoid using a large model specifically for video processing, we propose behaviour encoding, which boosts performance with minimal change to the model. Cross-attention using transformers has become popular in recent times and is utilised here to fuse the different modalities. Since long-term relations may exist, breaking the input into chunks is undesirable; the proposed model therefore processes the entire input together. Our experiments show the importance of each of the above contributions.
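The cross-attention fusion described above can be illustrated with a minimal sketch: one modality's features act as queries and attend over another modality's features via scaled dot-product attention. This is a generic illustration in numpy, not the authors' implementation; the shapes, modality names and feature dimension are hypothetical.

```python
import numpy as np

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: one modality (queries)
    attends to another modality (keys_values). Projections omitted
    for brevity; both inputs share the feature dimension d."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)          # (Tq, Tk)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over keys
    return weights @ keys_values                           # (Tq, d)

# Hypothetical example: video features attend to audio features.
# Note the full sequences are processed at once, without chunking.
rng = np.random.default_rng(0)
video = rng.standard_normal((8, 16))    # 8 video frames, 16-dim features
audio = rng.standard_normal((12, 16))   # 12 audio frames, same feature dim
fused = cross_attention(video, audio)
print(fused.shape)  # (8, 16)
```

Attending over the entire sequence in one pass, as sketched here, lets the attention weights capture long-term relations that chunked processing would cut off.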

Citation (APA)

Agrawal, T., Agarwal, D., Balazia, M., Sinha, N., & Bremond, F. (2022). Multimodal Personality Recognition using Cross-attention Transformer and Behaviour Encoding. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 5, pp. 501–508). Science and Technology Publications, Lda. https://doi.org/10.5220/0010841400003124
