Moving target tracking based on pulse coupled neural network and optical flow

Abstract

Video contains a large amount of motion information. The video, particularly video captured with a moving camera, is segmented based on the relative motion between moving targets and the background. Using the fusion ability of the pulse coupled neural network (PCNN), the target regions and the background regions are each fused. First, the PCNN fuses the direction of the optical flow field and extracts moving targets from the video, especially video with a moving camera. Meanwhile, attention information is generated from the phase spectra of the topological property and the color pairs (red/green, blue/yellow). Second, a video attention map is obtained by linearly fusing the above features (direction fusion, phase spectra, and velocity magnitude), with a weight assigned to each information channel. Experimental results show that the proposed method has better target tracking ability than three other methods: frequency-tuned salient region detection (FT) [5], visual background extractor (ViBe) [6], and phase spectrum of quaternion Fourier transform (PQFT) [1].
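The pipeline outlined in the abstract can be illustrated with a short Python sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the PCNN direction-fusion step is replaced by a simple placeholder (deviation from the dominant flow direction), the full quaternion PQFT is replaced by a scalar per-channel phase-spectrum saliency, Farnebäck optical flow is an assumed choice, and the channel weights w_dir, w_phase, and w_mag are hypothetical parameters.

import cv2
import numpy as np

def normalize01(x):
    """Scale an array to [0, 1]; guards against flat maps."""
    x = x.astype(np.float32)
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def phase_saliency(channel):
    """Single-channel phase-spectrum saliency: keep only the FFT phase,
    invert, and smooth. A scalar stand-in for the quaternion PQFT."""
    f = np.fft.fft2(channel)
    recon = np.abs(np.fft.ifft2(np.exp(1j * np.angle(f)))) ** 2
    return cv2.GaussianBlur(recon.astype(np.float32), (9, 9), 2.5)

def direction_map(flow):
    """Placeholder for the paper's PCNN fusion of flow direction:
    pixels whose direction deviates from the dominant (background)
    direction are treated as likely target pixels."""
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, edges = np.histogram(ang, bins=36, range=(0, 2 * np.pi),
                               weights=mag)
    dominant = edges[np.argmax(hist)]
    dev = np.abs(np.angle(np.exp(1j * (ang - dominant))))  # wrapped deviation
    return dev, mag

def attention_map(prev_bgr, cur_bgr, w_dir=0.5, w_phase=0.3, w_mag=0.2):
    """Linear fusion of the three channels named in the abstract
    (direction fusion, phase spectra, velocity magnitude). The weights
    are hypothetical; the paper's tuned values are not given here."""
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    cur_g = cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_g, cur_g, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dev, mag = direction_map(flow)

    b, g, r = [c.astype(np.float32) for c in cv2.split(cur_bgr)]
    rg = r - g                    # red/green opponent channel
    by = b - (r + g) / 2.0        # blue/yellow opponent channel
    phase = (phase_saliency(rg) + phase_saliency(by)) / 2.0

    return normalize01(w_dir * normalize01(dev) +
                       w_phase * normalize01(phase) +
                       w_mag * normalize01(mag))

Calling attention_map(frame0, frame1) on two consecutive BGR frames returns a [0, 1] saliency map; thresholding it yields candidate target regions for tracking.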

Citation (APA)

Ni, Q., Wang, J., & Gu, X. (2015). Moving target tracking based on pulse coupled neural network and optical flow. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9491, pp. 17–25). Springer Verlag. https://doi.org/10.1007/978-3-319-26555-1_3
