Hydra: A scalable proteomic search engine which utilizes the Hadoop distributed computing framework

Abstract

Background: For shotgun mass-spectrometry-based proteomics, the most computationally expensive step is matching the acquired spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Solutions that improve our ability to perform these searches are therefore needed.

Results: We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating output comparable to that of the original implementation for the same input files. The scalability of the system is demonstrated, and the architecture required for developing such distributed processing is discussed.

Conclusion: The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.

© 2012 Lewis et al.; licensee BioMed Central Ltd.
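
As a rough illustration of the architecture described in the abstract (not taken from the Hydra codebase), the sketch below shows how a spectral search can be partitioned on Hadoop MapReduce: a mapper keys each spectrum by an integer precursor-mass bin, and a reducer scores the spectra in each bin against the candidate peptides for that bin. The class names, the tab-separated input format, and the placeholder scoring value are illustrative assumptions only; a real engine would substitute the K-score comparison between observed and theoretical fragment peaks.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/** Minimal sketch only: partition spectra by precursor-mass bin, score per bin. */
public class SpectrumSearchSketch {

    /** Emits each spectrum keyed by an integer precursor-mass bin (1 Da wide here). */
    public static class BinMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
        @Override
        protected void map(LongWritable offset, Text spectrumLine, Context context)
                throws IOException, InterruptedException {
            // Assumed input format: one spectrum per line, "<precursorMass>\t<peakList>".
            String[] fields = spectrumLine.toString().split("\t", 2);
            if (fields.length < 2) {
                return; // skip malformed records
            }
            int massBin = (int) Math.floor(Double.parseDouble(fields[0]));
            context.write(new IntWritable(massBin), spectrumLine);
        }
    }

    /** Scores every spectrum in a mass bin against that bin's candidate peptides. */
    public static class ScoreReducer extends Reducer<IntWritable, Text, Text, DoubleWritable> {
        @Override
        protected void reduce(IntWritable massBin, Iterable<Text> spectra, Context context)
                throws IOException, InterruptedException {
            for (Text spectrum : spectra) {
                // Placeholder score: a real engine would compare the spectrum's peaks
                // against theoretical fragments of each candidate peptide (e.g. K-score).
                double bestScore = spectrum.getLength(); // stand-in value only
                context.write(spectrum, new DoubleWritable(bestScore));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "spectrum-search-sketch");
        job.setJarByClass(SpectrumSearchSketch.class);
        job.setMapperClass(BinMapper.class);
        job.setReducerClass(ScoreReducer.class);
        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Keying by precursor mass keeps each reducer's candidate peptide set small and lets independent bins be processed in parallel, which is the property that allows throughput to grow with the number of processors in the cluster.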

Citation (APA)

Lewis, S., Csordas, A., Killcoyne, S., Hermjakob, H., Hoopmann, M. R., Moritz, R. L., … Boyle, J. (2012). Hydra: A scalable proteomic search engine which utilizes the Hadoop distributed computing framework. BMC Bioinformatics, 13(1). https://doi.org/10.1186/1471-2105-13-324
