Parallelizing and optimizing LIP-Canny using NVIDIA CUDA

Abstract

The Canny algorithm is a well-known edge detector widely used in the early processing stages of many computer vision algorithms. An alternative, the LIP-Canny algorithm, is based on a robust mathematical model closer to the human visual system and obtains better edge-detection results. In this work we describe the LIP-Canny algorithm from the perspective of its parallelization and optimization using the NVIDIA CUDA framework. Furthermore, we present comparative results between an implementation of this algorithm using NVIDIA CUDA and an analogous C/C++ implementation. © 2010 Springer-Verlag.
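The LIP (Logarithmic Image Processing) model on which LIP-Canny is built replaces ordinary gray-level arithmetic with operations closer to human brightness perception; one common formulation maps an 8-bit image into an isomorphic space via phi(f) = -M * ln(1 - f/M) with M = 256, and the Canny stages then run in that space. As a minimal, hypothetical sketch of how such a per-pixel transform parallelizes under CUDA (this is not the authors' implementation; the kernel name, image size, and launch configuration are illustrative assumptions), consider:

#include <cuda_runtime.h>
#include <math.h>

#define M_LIP 256.0f  /* upper bound of the 8-bit gray-tone range */

/* Map each pixel into the LIP gray-tone domain via the classical
   isomorphism phi(f) = -M * ln(1 - f / M). */
__global__ void lipTransform(const unsigned char *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        /* Clamp to M - 1 so the logarithm stays finite. */
        float f = fminf((float)in[i], M_LIP - 1.0f);
        out[i] = -M_LIP * logf(1.0f - f / M_LIP);
    }
}

int main(void)
{
    const int n = 512 * 512;  /* hypothetical image size */
    unsigned char *dIn;
    float *dOut;
    cudaMalloc((void **)&dIn, n * sizeof(unsigned char));
    cudaMalloc((void **)&dOut, n * sizeof(float));
    /* ... upload the input image into dIn with cudaMemcpy ... */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    lipTransform<<<blocks, threads>>>(dIn, dOut, n);
    cudaDeviceSynchronize();
    /* ... the Canny stages (smoothing, gradients, non-maximum
       suppression, hysteresis) would then operate on dOut ... */
    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}

Because every pixel is transformed independently, this step is embarrassingly parallel, which is precisely the property that makes LIP-Canny a good candidate for GPU acceleration.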

Citation (APA)

Palomar, R., Palomares, J. M., Castillo, J. M., Olivares, J., & Gómez-Luna, J. (2010). Parallelizing and optimizing LIP-Canny using NVIDIA CUDA. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6098 LNAI, pp. 389–398). https://doi.org/10.1007/978-3-642-13033-5_40
