Deep feature-based three-stage detection of banknotes and coins for assisting visually impaired people


Abstract

Owing to the rapid advancements in smartphone technology, there is an emerging need for a technology that can detect banknotes and coins using the cameras embedded in smartphones, in order to assist visually impaired people. Previous studies have mostly used handcrafted feature-based methods, such as scale-invariant feature transform (SIFT) or speeded-up robust features (SURF), which cannot produce robust detection results for banknotes or coins captured against varied backgrounds and in varied environments. With the recent advancement of deep learning, some studies have addressed banknote and coin detection using deep convolutional neural networks (CNNs); however, their performance also degrades under changes in background and environment. To overcome these drawbacks, this paper proposes a three-stage detection technique for banknotes and coins that applies a faster region-based CNN (Faster R-CNN), geometric constraints, and a residual network (ResNet). In experiments performed on an open database of Jordanian dinar (JOD) and on 6,400 images of eight types of Korean won banknotes and coins captured with our smartphones, the proposed method exhibited better detection performance than state-of-the-art methods based on handcrafted features and deep features.
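The abstract only outlines the pipeline, but the second stage (geometric constraints) is straightforward to sketch: candidate boxes from the stage-1 detector are filtered by simple shape rules before their crops are classified by the stage-3 ResNet. The snippet below is an illustrative, hypothetical example using aspect-ratio bounds; the actual constraints and thresholds used in the paper are not stated in the abstract.

```python
# Hypothetical sketch of the geometric-constraint stage: filter candidate
# boxes from a stage-1 detector before a stage-3 classifier sees the crops.
# The aspect-ratio bounds below are illustrative assumptions, not values
# taken from the paper.

def filter_by_aspect_ratio(boxes, min_ratio, max_ratio):
    """Keep boxes (x1, y1, x2, y2) whose width/height ratio lies in range."""
    kept = []
    for (x1, y1, x2, y2) in boxes:
        w, h = x2 - x1, y2 - y1
        if h <= 0:  # degenerate box, discard
            continue
        if min_ratio <= w / h <= max_ratio:
            kept.append((x1, y1, x2, y2))
    return kept

# Banknotes are roughly 2:1 rectangles; coins are roughly 1:1 circles.
candidates = [(0, 0, 200, 100),   # ~2:1, plausible banknote
              (0, 0, 50, 50),     # 1:1, plausible coin
              (0, 0, 300, 20)]    # extreme ratio, likely background clutter
banknote_boxes = filter_by_aspect_ratio(candidates, 1.6, 2.6)
coin_boxes = filter_by_aspect_ratio(candidates, 0.8, 1.2)
```

Such a filter is cheap relative to running the classifier, so rejecting implausibly shaped regions early reduces both false positives and stage-3 compute.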

APA

Park, C., Cho, S. W., Baek, N. R., Choi, J., & Park, K. R. (2020). Deep feature-based three-stage detection of banknotes and coins for assisting visually impaired people. IEEE Access, 8, 184598–184613. https://doi.org/10.1109/ACCESS.2020.3029526
