Reliability of Results and Fairness in the Comparison of Rates Among 3D Facial Expression Recognition Works


Abstract

The capability of replicating experiments and comparing results is a basic premise of scientific progress. It is therefore imperative that validation experiments follow transparent methodological steps and be reported clearly enough to allow accurate replication and fair comparison of results. In 3D facial expression recognition, reported results are estimates of the performance of a classification system and therefore carry an intrinsic degree of uncertainty. Consequently, the reliability of an evaluation measure is directly related to its stability. In this work, we examine the experimental setups reported by a set of 3D facial expression recognition studies published from 2013 to 2018. This investigation reveals that concern with the stability of mean recognition rates is present in only a small portion of the studies, and that the highest rates in this domain are also, potentially, the most unstable. These findings prompt a reflection on the fairness of comparisons in this domain.
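The abstract's point about stability can be illustrated with a small sketch: repeating the same cross-validation protocol with different random splits yields a spread of mean recognition rates rather than a single number. The synthetic dataset, SVM classifier, and fold counts below are illustrative assumptions, not the authors' experimental setup.

```python
# Minimal sketch (not the paper's protocol): estimate how stable a
# "mean recognition rate" is by repeating 10-fold cross-validation
# with 30 different random partitions of the same small dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Stand-in for a small expression dataset: ~100 samples, 6 classes,
# roughly the size of typical 3D facial expression benchmarks.
X, y = make_classification(n_samples=100, n_features=60, n_informative=30,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

mean_rates = []
for seed in range(30):  # 30 independent repetitions of the same protocol
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=cv)
    mean_rates.append(scores.mean())  # one reported "mean recognition rate"

mean_rates = np.array(mean_rates)
print(f"mean of means : {mean_rates.mean():.3f}")
print(f"std of means  : {mean_rates.std(ddof=1):.3f}")
print(f"range of means: [{mean_rates.min():.3f}, {mean_rates.max():.3f}]")
```

If the spread of these repeated means is comparable to the gaps between published rates, ranking methods by a single reported mean is not a fair comparison, which is the concern the paper raises.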

Citation (APA)

Alexandre, G. R., Thé, G. A. P., & Soares, J. M. (2019). Reliability of Results and Fairness in the Comparison of Rates Among 3D Facial Expression Recognition Works. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11678 LNCS, pp. 391–401). Springer Verlag. https://doi.org/10.1007/978-3-030-29888-3_31
