GPU-based parallelization algorithm for 2D line integral convolution

Abstract

GPU (Graphics Processing Unit) technology provides an efficient platform for parallel computation. This paper presents a GPU-based parallel Line Integral Convolution (LIC) algorithm that accelerates the visualization of discrete vector fields. The algorithm is implemented with parallel operations on the GPU using the Compute Unified Device Architecture (CUDA) programming model. Compared to conventional sequential computation, the method provides up to about 50× speed-up without any sacrifice in solution quality. Experimental results show that it is useful for in-time remote visualization of discrete vector fields. © 2010 Springer-Verlag.
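The abstract does not include the authors' code, but the underlying LIC technique is standard: each output pixel is the average of an input noise texture sampled along the streamline passing through that pixel, traced forward and backward through the vector field. The sketch below is a minimal sequential Python/NumPy reference of that idea (all names, step sizes, and the Euler integrator are illustrative assumptions, not taken from the paper); the paper's CUDA version parallelizes this by mapping the per-pixel loop onto GPU threads.

```python
import numpy as np

def lic_2d(vx, vy, noise, length=10, step=0.5):
    """Sequential reference LIC (illustrative sketch, not the authors' code).

    For each pixel, trace a streamline forward and backward through the
    vector field (vx, vy) with fixed-step Euler integration, and average
    the noise texture samples along it. A CUDA port would assign one
    thread per output pixel, since the pixels are independent.
    """
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for direction in (1.0, -1.0):      # forward, then backward
                px, py = float(x), float(y)
                for _ in range(length):
                    ix, iy = int(round(px)), int(round(py))
                    if not (0 <= ix < w and 0 <= iy < h):
                        break                   # streamline left the grid
                    total += noise[iy, ix]
                    count += 1
                    mag = np.hypot(vx[iy, ix], vy[iy, ix])
                    if mag < 1e-9:
                        break                   # critical point: stop tracing
                    # Euler step along the normalized field direction
                    px += direction * step * vx[iy, ix] / mag
                    py += direction * step * vy[iy, ix] / mag
            out[y, x] = total / max(count, 1)
    return out
```

For a constant horizontal field this reduces to a 1D moving average along rows, which is why LIC images show streaks aligned with the flow: averaging along streamlines correlates pixels in the flow direction while leaving the cross-flow direction noisy.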


Citation (APA)

Qin, B., Wu, Z., Su, F., & Pang, T. (2010). GPU-based parallelization algorithm for 2D line integral convolution. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6145 LNCS, pp. 397–404). https://doi.org/10.1007/978-3-642-13495-1_49
