GPU-based biclustering for neural information processing


Abstract

This paper presents an efficient mapping of the geometric biclustering (GBC) algorithm for neural information processing onto the Graphics Processing Unit (GPU). The proposed design comprises five versions that extensively study the use of the memory components on the GPU board for mapping the GBC algorithm. The GBC algorithm finds maximal biclusters, i.e., patterns common to subsets of columns, in neural processing and gene microarray data. A microarray typically contains a huge amount of data, often thousands of rows by thousands of columns, so finding the maximal biclusters is computationally intensive. The advantage of the GPU is its capacity for parallel computing: independent procedures can be carried out at the same time. Experimental results show that the GPU-based GBC greatly reduces the processing time thanks to the parallelism and scalability of the GPU. In particular, the GBC algorithm involves a large number of AND operations, which map well onto parallel GPU computation and can likewise benefit other neural processing algorithms. © 2012 Springer-Verlag.
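The abstract highlights that the core of GBC is a large number of independent AND operations over binary column patterns, which is exactly what makes the algorithm GPU-friendly. As a minimal sketch of that AND step, assuming columns are encoded as binary membership patterns over the rows (the function names and the bitmask encoding are illustrative, not from the paper; on the GPU each pairwise AND would be a separate parallel thread):

```python
# Hedged sketch: the column-wise AND step at the heart of geometric
# biclustering, in pure Python. Integer bitmasks stand in for the
# per-column binary patterns; on a GPU, many such ANDs run in parallel.

def column_bitmask(column):
    """Pack a binary column (list of 0/1) into a single integer bitmask."""
    mask = 0
    for i, bit in enumerate(column):
        if bit:
            mask |= 1 << i
    return mask

def and_intersection(mask_a, mask_b):
    """Rows where both columns carry the pattern -- the AND operation
    that the GPU performs in parallel across many column pairs."""
    return mask_a & mask_b

# Toy binary matrix: 4 rows x 3 columns.
matrix = [
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
    [1, 1, 0],
]
cols = [[row[j] for row in matrix] for j in range(3)]
masks = [column_bitmask(c) for c in cols]

# Rows shared by columns 0 and 1 -- a candidate bicluster's row set.
common = and_intersection(masks[0], masks[1])
rows = [i for i in range(len(matrix)) if (common >> i) & 1]
print(rows)  # -> [0, 1, 3]
```

Because each column-pair AND is independent of every other, the full set of pairwise intersections can be computed concurrently, which is the parallelism the paper exploits on the GPU.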


CITATION STYLE

APA

Lo, A. W. Y., Liu, B., & Cheung, R. C. C. (2012). GPU-based biclustering for neural information processing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7667 LNCS, pp. 134–141). https://doi.org/10.1007/978-3-642-34500-5_17
