FPGA implementation of a vision-based motion estimation algorithm for an underwater robot

Abstract

This paper presents an FPGA implementation for real-time motion estimation of an underwater robot using computer vision. For every image acquired by the camera, the algorithm searches for correspondences of a given number of interest points against previous reference images. To minimise lighting problems, normalised correlation is used as the similarity measure to match corresponding points across images. The complexity of the normalised correlation criterion led to two main parts in the hardware implementation: an array of Processing Elements (PEs) and a Post-Processing Element (PPE).
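
The abstract does not give the exact formulation of the normalised correlation criterion; the sketch below shows one common form, zero-mean normalised cross-correlation between two grey-level windows, as a plain C reference. The function name ncc_score and its parameters are illustrative assumptions, and this software loop merely stands in for the computation that the paper maps onto the PE array and the PPE in hardware.

```c
#include <math.h>
#include <stdint.h>

/* Zero-mean normalised cross-correlation between two NxN grey-level
 * windows, read from row-major images with the given stride.
 * Returns a score in [-1, 1]; higher means a better match.
 * Illustrative software sketch only, not the paper's hardware design. */
double ncc_score(const uint8_t *a, const uint8_t *b, int n, int stride)
{
    /* First pass: window means. */
    double sum_a = 0.0, sum_b = 0.0;
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x) {
            sum_a += a[y * stride + x];
            sum_b += b[y * stride + x];
        }
    double mean_a = sum_a / (double)(n * n);
    double mean_b = sum_b / (double)(n * n);

    /* Second pass: correlation numerator and per-window energies. */
    double num = 0.0, den_a = 0.0, den_b = 0.0;
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x) {
            double da = (double)a[y * stride + x] - mean_a;
            double db = (double)b[y * stride + x] - mean_b;
            num   += da * db;
            den_a += da * da;
            den_b += db * db;
        }

    double den = sqrt(den_a * den_b);
    return den > 0.0 ? num / den : 0.0; /* flat windows get score 0 */
}
```

In a matching loop, this score would be evaluated for each candidate window in the reference image around an interest point, keeping the candidate with the highest score; subtracting the window means is what makes the measure tolerant to the illumination changes mentioned in the abstract.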

Citation (APA)

Ila, V., Garcia, R., Charot, F., & Batlle, J. (2004). FPGA implementation of a vision-based motion estimation algorithm for an underwater robot. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3203, pp. 1152–1154). Springer-Verlag. https://doi.org/10.1007/978-3-540-30117-2_153
