Signature recognition: Human performance analysis vs. automatic system and feature extraction via crowdsourcing

5 citations · 8 Mendeley readers

This article is free to access.

Abstract

This paper presents discriminative features obtained by comparing signature-authenticity judgments: the standardized responses of a group of people with no experience in signature recognition, collected through a manual crowdsourcing-based system, are contrasted with the performance of an automatic system using two classifiers. To this end, an experimental protocol is implemented through interfaces programmed in HTML and published on the Amazon Mechanical Turk platform, which allows collecting responses from 500 workers about the authenticity of the signatures shown to them. By normalizing these responses, several general characteristics are obtained that serve for the extraction of discriminative features in signature recognition. A comparison in terms of False Acceptance Rate and False Rejection Rate grounds the presented features, which will serve as a basis for future performance analyses in the implementation of automatic and semiautomatic signature recognition systems supporting financial, legal, and security applications.
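The abstract compares human and automatic verdicts in terms of False Acceptance Rate (forged signatures accepted as genuine) and False Rejection Rate (genuine signatures rejected). A minimal sketch of that computation is shown below; the function name, the toy verdict data, and the majority-vote framing are illustrative assumptions, not the authors' actual dataset or code.

```python
def far_frr(ground_truth, decisions):
    """Compute FAR and FRR from per-signature verdicts.

    ground_truth: list of bools, True = genuine signature.
    decisions:    list of bools, True = accepted as genuine
                  (e.g. a crowd-worker majority vote or a classifier output).
    """
    false_accepts = sum(1 for g, d in zip(ground_truth, decisions) if not g and d)
    false_rejects = sum(1 for g, d in zip(ground_truth, decisions) if g and not d)
    impostors = sum(1 for g in ground_truth if not g)
    genuines = sum(1 for g in ground_truth if g)
    far = false_accepts / impostors if impostors else 0.0
    frr = false_rejects / genuines if genuines else 0.0
    return far, frr

# Toy example: 4 genuine and 4 forged signatures,
# with one false rejection and one false acceptance.
truth    = [True, True, True, True, False, False, False, False]
verdicts = [True, True, True, False, True, False, False, False]
far, frr = far_frr(truth, verdicts)
print(far, frr)  # 0.25 0.25
```

The same function can score both the crowd responses and each automatic classifier, which is what makes the human-vs-machine comparison in the paper possible on a common scale.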

Citation (APA)

Morocho, D., Proaño, M., Alulema, D., Morales, A., & Fierrez, J. (2016). Signature recognition: Human performance analysis vs. automatic system and feature extraction via crowdsourcing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9703, pp. 324–334). Springer Verlag. https://doi.org/10.1007/978-3-319-39393-3_32
