Design and implementation of an accelerated Gabor filter bank using parallel hardware

Abstract

In computer vision, images are often preprocessed with the so-called Gabor transform. Using a Gabor filter bank, an image can be decomposed into orientation components lying in a specified frequency range. This biologically motivated decomposition simplifies higher-level image processing such as contour extraction or pattern recognition. However, an IEEE floating-point implementation of the filter is too slow for real-time image processing, especially when mobile applications with limited resources are targeted. This paper describes how this limitation can be overcome by a hardware implementation of the filter algorithm. The implementation itself is preceded by an analysis of the algorithm that examines the effects of reduced-accuracy arithmetic and the potential for parallelizing the process. The target device is a Xilinx Virtex FPGA residing on a PCI rapid-prototyping board. © Springer-Verlag Berlin Heidelberg 2001.
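
The paper presents a hardware design rather than code; as a rough software sketch of the decomposition described above, the following Python fragment builds a small bank of real-valued Gabor kernels at evenly spaced orientations and then quantizes the coefficients to fixed point, loosely mirroring the reduced-accuracy analysis the abstract mentions. All function names, parameter values, and bit widths here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=1.0, psi=0.0):
    """Real-valued 2D Gabor kernel: cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation frame.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

def gabor_bank(size, wavelength, sigma, n_orientations):
    """Bank of kernels with orientations evenly spaced over [0, pi)."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return [gabor_kernel(size, wavelength, t, sigma) for t in thetas]

def quantize(kernel, frac_bits=8):
    """Round coefficients to signed fixed point with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return np.round(kernel * scale).astype(np.int32)

if __name__ == "__main__":
    # Hypothetical parameters for demonstration only.
    bank = gabor_bank(size=15, wavelength=6.0, sigma=3.0, n_orientations=4)
    fixed = [quantize(k, frac_bits=8) for k in bank]
    print(fixed[0].dtype, fixed[0].shape)  # int32 (15, 15)
```

Quantizing the coefficients this way is what makes an FPGA implementation attractive: fixed-point multiply-accumulate units cost far less logic than IEEE floating-point ones, and the convolutions for the individual orientations are independent, so they can run in parallel on the device.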

Citation (APA)

Voß, N., & Mertsching, B. (2001). Design and implementation of an accelerated Gabor filter bank using parallel hardware. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2147, 451–460. https://doi.org/10.1007/3-540-44687-7_46
