A scalable approach for LRT computation in GPGPU environments

Abstract

In this paper we propose new algorithmic techniques for massively data-parallel computation of the Likelihood Ratio Test (LRT) on a large spatial data grid. LRT is the state-of-the-art method for identifying hotspots or anomalous regions in spatially referenced data. LRT is highly adaptable, permitting the use of a large class of statistical distributions to model the data. However, standard sequential implementations of LRT may take several days on modern machines to identify anomalous regions, even for moderately sized spatial grids. This work makes three novel contributions. First, we devise a dynamic program with an O(n²) pre-processing step that allows us to compute the statistic for any given region in O(1), where n is the length of the grid. Second, we propose a scheme to accelerate the likelihood computation of a complement region using a bounding technique. Third, we provide a parallelization strategy for the LRT computation on GPGPUs. In concert, the three contributions yield a speed-up of nearly four hundred times, reducing the LRT computation time for large spatial grids from several days to minutes. © 2013 Springer-Verlag.
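
As a rough illustration of how these pieces can fit together, the sketch below builds a summed-area table on the host (an O(n²) pre-processing pass over an n × n grid, after which the aggregate of any axis-aligned rectangle is an O(1) lookup) and then scores every candidate rectangle in parallel on the GPU, one thread per region. The summed-area-table form of the dynamic program, the Poisson (Kulldorff-style) score, the one-thread-per-rectangle mapping, and all names in the code are illustrative assumptions, not the authors' implementation; in particular, the paper's bounding technique for pruning complement-region likelihood computations is omitted here.

```cuda
// A minimal sketch, assuming a summed-area table (SAT) as the O(n^2)
// pre-processing structure and a Poisson LRT score. Both are assumptions
// for illustration; the paper's dynamic program and distribution family
// may differ.
#include <cstdio>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>

constexpr int N = 64;                             // grid side length (illustrative)
constexpr long long NRECT = 1LL * N * N * N * N;  // enumeration space of rectangles

// O(1) rectangle sum over [r0..r1] x [c0..c1] from the SAT.
__host__ __device__ inline double rectSum(const double* sat,
                                          int r0, int c0, int r1, int c1) {
    double s = sat[r1 * N + c1];
    if (r0 > 0)           s -= sat[(r0 - 1) * N + c1];
    if (c0 > 0)           s -= sat[r1 * N + (c0 - 1)];
    if (r0 > 0 && c0 > 0) s += sat[(r0 - 1) * N + (c0 - 1)];
    return s;
}

// Poisson LRT score (Kulldorff form): c/b are the region's count and
// baseline, C/B the grid totals; the complement region contributes the
// (C - c, B - b) term. Zero unless the region is elevated.
__device__ double lrtScore(double c, double b, double C, double B) {
    double cc = C - c, cb = B - b;
    if (b <= 0.0 || cb <= 0.0 || c / b <= cc / cb) return 0.0;
    double s = -C * log(C / B);
    if (c  > 0.0) s += c  * log(c  / b);
    if (cc > 0.0) s += cc * log(cc / cb);
    return s;
}

// Max over non-negative doubles: their IEEE bit patterns preserve order,
// so a 64-bit integer atomicMax is a valid max over scores >= 0.
__device__ void atomicMaxPositive(double* addr, double val) {
    atomicMax((unsigned long long*)addr,
              (unsigned long long)__double_as_longlong(val));
}

// One thread per candidate rectangle; each scores its region in O(1).
__global__ void scanAll(const double* satCnt, const double* satBase,
                        double C, double B, double* best) {
    long long idx = blockIdx.x * (long long)blockDim.x + threadIdx.x;
    if (idx >= NRECT) return;
    int c1 = idx % N; idx /= N;
    int c0 = idx % N; idx /= N;
    int r1 = idx % N; idx /= N;
    int r0 = (int)idx;
    if (r0 > r1 || c0 > c1) return;               // not a valid rectangle
    double c = rectSum(satCnt,  r0, c0, r1, c1);
    double b = rectSum(satBase, r0, c0, r1, c1);
    atomicMaxPositive(best, lrtScore(c, b, C, B));
}

int main() {
    std::vector<double> cnt(N * N, 1.0), base(N * N, 1.0);
    cnt[10 * N + 10] = 50.0;                      // synthetic hotspot cell
    // O(n^2) pre-processing: build summed-area tables on the host.
    std::vector<double> satC(N * N), satB(N * N);
    for (int r = 0; r < N; ++r)
        for (int col = 0; col < N; ++col) {
            satC[r * N + col] = cnt[r * N + col]
                + (r ? satC[(r - 1) * N + col] : 0)
                + (col ? satC[r * N + col - 1] : 0)
                - (r && col ? satC[(r - 1) * N + col - 1] : 0);
            satB[r * N + col] = base[r * N + col]
                + (r ? satB[(r - 1) * N + col] : 0)
                + (col ? satB[r * N + col - 1] : 0)
                - (r && col ? satB[(r - 1) * N + col - 1] : 0);
        }
    double C = satC[N * N - 1], B = satB[N * N - 1];

    double *dC, *dB, *dBest, zero = 0.0;
    cudaMalloc(&dC, N * N * sizeof(double));
    cudaMalloc(&dB, N * N * sizeof(double));
    cudaMalloc(&dBest, sizeof(double));
    cudaMemcpy(dC, satC.data(), N * N * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, satB.data(), N * N * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dBest, &zero, sizeof(double), cudaMemcpyHostToDevice);

    int threads = 256;
    long long blocks = (NRECT + threads - 1) / threads;
    scanAll<<<(unsigned)blocks, threads>>>(dC, dB, C, B, dBest);

    double best;
    cudaMemcpy(&best, dBest, sizeof(double), cudaMemcpyDeviceToHost);
    printf("max LRT score over all rectangles: %f\n", best);
    cudaFree(dC); cudaFree(dB); cudaFree(dBest);
    return 0;
}
```

Note how the complement region's likelihood term falls out of the grid totals (C − c, B − b) at no extra cost once the region aggregate is an O(1) lookup; that is what makes the pre-processing pay off across the O(n⁴) candidate rectangles that the GPU threads sweep.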

Citation

Pang, L. X., Chawla, S., Scholz, B., & Wilcox, G. (2013). A scalable approach for LRT computation in GPGPU environments. In Lecture Notes in Computer Science (Vol. 7808, pp. 595–608). Springer. https://doi.org/10.1007/978-3-642-37401-2_58
