Facial action unit detection via hybrid relational reasoning

Abstract

Correlations among facial action units (AUs) convey significant information for AU detection yet have not been thoroughly exploited. Most existing methods learn the regional correlation distribution of each AU, or reason about the dependencies among AUs. However, these methods typically either predefine the correlations based on prior knowledge, which often ignores useful information, or directly learn the correlations guided by AU detection, which often includes irrelevant information. To resolve these limitations, we propose a novel hybrid relational reasoning framework for AU detection. In particular, we propose to adaptively reason about pixel-level correlations of each AU, under the constraint of regional correlations predefined by facial landmarks, as well as the supervision of AU detection. Moreover, we propose to adaptively reason about AU-level correlations using a graph convolutional network, by considering both predefined AU relationships and learnable relationship weights. Our framework thus integrates the advantages of correlation predefinition and correlation learning. Extensive experiments demonstrate that our approach (i) soundly outperforms state-of-the-art AU detection methods on the challenging BP4D, DISFA, and GFT benchmarks, and (ii) can precisely reason about the regional correlation distribution of each AU.
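The AU-level reasoning described above can be illustrated with a minimal numpy sketch of one graph-convolution step whose adjacency combines a fixed prior with learnable weights. All names, shapes, and the specific way the prior and learned weights are combined (element-wise gating followed by row normalization) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_gcn_layer(H, A_pre, W_learn, W_feat):
    """One graph-convolution step over AU nodes (illustrative sketch).

    H:       (n_au, d)    node features, one row per action unit.
    A_pre:   (n_au, n_au) predefined AU-relationship adjacency (prior knowledge).
    W_learn: (n_au, n_au) learnable relationship logits, trained end to end.
    W_feat:  (d, d_out)   feature transform of the GCN layer.
    """
    # Gate the fixed prior with learned weights squashed to (0, 1);
    # how the paper combines the two terms is not specified here.
    A = A_pre * (1.0 / (1.0 + np.exp(-W_learn)))
    # Add self-loops and row-normalize so each AU averages over its neighbors.
    A = A + np.eye(A.shape[0])
    A = A / A.sum(axis=1, keepdims=True)
    # Propagate features along AU relationships and apply ReLU.
    return np.maximum(A @ H @ W_feat, 0.0)

# Toy usage: 12 AUs (as in BP4D), 16-dim features.
n_au, d, d_out = 12, 16, 16
H = rng.standard_normal((n_au, d))
A_pre = (rng.random((n_au, n_au)) > 0.5).astype(float)
W_learn = rng.standard_normal((n_au, n_au))
W_feat = rng.standard_normal((d, d_out))
H_next = hybrid_gcn_layer(H, A_pre, W_learn, W_feat)
```

Because `A_pre` masks which relationships exist while the sigmoid of `W_learn` scales their strength, the layer can down-weight a predefined edge that turns out to be unhelpful, matching the paper's stated goal of combining predefinition with learning.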

Citation (APA)

Shao, Z., Zhou, Y., Liu, B., Zhu, H., Du, W. L., & Zhao, J. (2022). Facial action unit detection via hybrid relational reasoning. Visual Computer, 38(9–10), 3045–3057. https://doi.org/10.1007/s00371-022-02527-w
