The Canny algorithm is a well-known edge detector widely used in the early processing stages of many computer vision algorithms. An alternative, the LIP-Canny algorithm, is based on a robust mathematical model that is closer to the human visual system and obtains better edge-detection results. In this work we describe the LIP-Canny algorithm from the perspective of its parallelization and optimization using the NVIDIA CUDA framework. Furthermore, we present comparative results between an implementation of this algorithm using NVIDIA CUDA and an analogous C/C++ implementation. © 2010 Springer-Verlag.
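As background for the abstract above: LIP-Canny applies Canny's gradient machinery after mapping gray levels through the Logarithmic Image Processing (LIP) isomorphism. The sketch below is a minimal illustration of that transform, assuming the standard Jourlin–Pinoli LIP model with gray-level range bound M = 256; the function names are ours, not from the paper, and the paper's actual CUDA kernels are not reproduced here.

```cpp
#include <cmath>

// Assumption: classical LIP isomorphism phi(f) = -M * ln(1 - f/M),
// which maps a gray level f in [0, M) into the LIP space where
// Canny's gradient operators can be applied (names are illustrative).
double lip_phi(double f, double M = 256.0) {
    return -M * std::log(1.0 - f / M);
}

// Inverse isomorphism, mapping a LIP-space value back to a gray level.
double lip_phi_inv(double g, double M = 256.0) {
    return M * (1.0 - std::exp(-g / M));
}
```

Because the transform is a per-pixel, data-parallel operation, it is a natural fit for a CUDA kernel where each thread maps one pixel independently.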
CITATION STYLE
Palomar, R., Palomares, J. M., Castillo, J. M., Olivares, J., & Gómez-Luna, J. (2010). Parallelizing and optimizing LIP-Canny using NVIDIA CUDA. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6098 LNAI, pp. 389–398). https://doi.org/10.1007/978-3-642-13033-5_40