Where-what network with CUDA: General object recognition and location in complex backgrounds


Abstract

An effective framework for general object recognition and localization in complex backgrounds had not been found until the brain-inspired Where-What Network (WWN) series by Weng and coworkers. This paper reports two advances along this line. The first is the automatic adaptation of each neuron's receptive field so that input dimensions arising from the background are disregarded without a handcrafted object model, since the initial hexagonal receptive field does not fit the contour of the automatically assigned object view well. The second is a hierarchical parallelization technique and its implementation on a GPU-based accelerator using the CUDA parallel language. The experimental results showed that automatic adaptation of the receptive fields improved the recognition rate, and the hierarchical parallelization achieved a 16-fold speedup over the serial C program. The accelerated network was deployed on the Haibao Robot exhibited at World Expo 2010, Shanghai. © 2011 Springer-Verlag.
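The per-neuron work that such a GPU parallelization maps to one thread per neuron can be sketched in serial form. The sketch below is illustrative only: the function names, the normalized-inner-product pre-response, and the top-k winner-take-all competition are assumptions drawn from the general WWN literature, not details taken from this paper's CUDA implementation.

```python
import math

def preresponse(weight, x):
    # Hypothetical pre-response: normalized inner product between the
    # neuron's weight vector and the input patch (both treated as vectors).
    nw = math.sqrt(sum(w * w for w in weight)) or 1.0
    nx = math.sqrt(sum(v * v for v in x)) or 1.0
    return sum(w * v for w, v in zip(weight, x)) / (nw * nx)

def topk_responses(weights, x, k=1):
    # Each neuron's pre-response depends only on its own weights and the
    # shared input, so this loop is embarrassingly parallel: a CUDA kernel
    # would assign one thread per neuron. Top-k competition then keeps the
    # k strongest responses and suppresses the rest.
    r = [preresponse(w, x) for w in weights]
    winners = set(sorted(range(len(r)), key=lambda i: r[i], reverse=True)[:k])
    return [r[i] if i in winners else 0.0 for i in range(len(r))]
```

For example, with three neurons whose weights are `[1, 0]`, `[0, 1]`, and `[1, 1]` and input `[1, 0]`, the first neuron wins the k=1 competition and the others are zeroed out.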



CITATION STYLE

APA

Wang, Y., Wu, X., Song, X., Zhang, W., & Weng, J. (2011). Where-what network with CUDA: General object recognition and location in complex backgrounds. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6676 LNCS, pp. 331–341). https://doi.org/10.1007/978-3-642-21090-7_39
