GPU (Graphics Processing Unit) technology provides an efficient platform for parallel computation. This paper presents a GPU-based parallel Line Integral Convolution (LIC) algorithm for the visualization of discrete vector fields, designed to accelerate the LIC algorithm. The algorithm is implemented with parallel operations on the GPU using the Compute Unified Device Architecture (CUDA) programming model. Compared to conventional sequential computation, the method provides speed-ups of up to about 50× with no sacrifice in solution quality. Experimental results show that it is useful for real-time remote visualization of discrete vector fields. © 2010 Springer-Verlag.
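LIC is well suited to GPU parallelization because each output pixel is computed independently: a short streamline is traced through the vector field from the pixel, and a noise texture is averaged along it. The sketch below illustrates that per-pixel core in plain Python; the function names, the fixed-step Euler tracer, and the box convolution kernel are our own illustrative assumptions, not the paper's implementation.

```python
def trace(vx, vy, x, y, w, h, steps, sign, step=0.5):
    """Yield the grid cells visited while tracing a streamline from (x, y).

    sign = +1 traces along the field, sign = -1 against it (simple Euler steps).
    """
    for _ in range(steps):
        i, j = int(x), int(y)
        if not (0 <= i < w and 0 <= j < h):
            break
        yield i, j
        u, v = vx[j][i], vy[j][i]
        norm = (u * u + v * v) ** 0.5
        if norm == 0.0:  # critical point: stop tracing
            break
        x += sign * step * u / norm
        y += sign * step * v / norm


def lic(vx, vy, noise, steps=10):
    """Line Integral Convolution: average a noise texture along streamlines.

    vx, vy: 2D lists giving the vector field components on the pixel grid.
    noise:  2D list, the input white-noise texture.
    Returns the LIC image as a 2D list of floats.
    """
    h, w = len(noise), len(noise[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):           # on a GPU, this double loop becomes
        for x in range(w):       # one CUDA thread per output pixel
            total, count = 0.0, 0
            for sign in (1.0, -1.0):  # trace forward and backward
                for i, j in trace(vx, vy, x + 0.5, y + 0.5, w, h, steps, sign):
                    total += noise[j][i]
                    count += 1
            out[y][x] = total / count if count else noise[y][x]
    return out
```

Because the inner per-pixel work touches only read-only inputs and writes one output cell, the two outer loops map directly onto a CUDA grid of threads, which is the source of the reported speed-up.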
Qin, B., Wu, Z., Su, F., & Pang, T. (2010). GPU-based parallelization algorithm for 2D line integral convolution. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6145 LNCS, pp. 397–404). https://doi.org/10.1007/978-3-642-13495-1_49