Visual Percepts Quality Recognition Using Convolutional Neural Networks


Abstract

In visual recognition systems, it is necessary to distinguish between good and bad quality images. Visual percepts are discrete representations of observable objects. In typical systems, visual parameters are tuned for optimal detection of good quality images; however, over a wide range of visual contexts, these parameters are usually not optimized. This study focused on learning to detect good and bad percepts in a given visual context using a convolutional neural network. The system used a perception-action model with memory and learning mechanisms, which was trained and validated at four road traffic locations (DS0, DS3-1, DS4-1, DS4-3). The training accuracies for DS0, DS3-1, DS4-1, and DS4-3 were 93.53%, 91.16%, 93.39%, and 95.76%, respectively; the corresponding validation accuracies were 88.73%, 77.40%, 95.21%, and 83.56%. Based on these results, the system can adequately learn to differentiate between good and bad quality percepts.
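The abstract describes a CNN that maps a visual percept to a binary good/bad quality decision, but does not specify the network here. As an illustration only, the sketch below implements the basic building blocks of such a classifier (convolution, ReLU, max pooling, and a sigmoid output) in plain NumPy; all layer shapes, weights, and names are hypothetical and not taken from the paper.

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D convolution of a single-channel image x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling."""
    H2, W2 = x.shape[0] // s, x.shape[1] // s
    return x[:H2 * s, :W2 * s].reshape(H2, s, W2, s).max(axis=(1, 3))

def classify_percept(img, kernel, w, b):
    """Return the (illustrative) probability that a percept is 'good' quality."""
    features = max_pool(relu(conv2d(img, kernel)))
    z = features.ravel() @ w + b
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid -> value in (0, 1)

# Demo with random weights: a 16x16 percept, one 3x3 filter,
# pooled to 7x7 = 49 features feeding a single output unit.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
kernel = rng.standard_normal((3, 3))
w = rng.standard_normal(49) * 0.1
p = classify_percept(img, kernel, w, b=0.0)
```

In the actual system, the weights would be learned from labeled good/bad percepts, and a decision threshold (e.g. 0.5) on the output would produce the binary quality label whose accuracy the paper reports per dataset.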


APA

Billones, R. K. C., Bandala, A. A., Gan Lim, L. A., Sybingco, E., Fillone, A. M., & Dadios, E. P. (2020). Visual Percepts Quality Recognition Using Convolutional Neural Networks. In Advances in Intelligent Systems and Computing (Vol. 944, pp. 652–665). Springer Verlag. https://doi.org/10.1007/978-3-030-17798-0_52
