NGS short read alignment algorithms and the role of big data and cloud computing


Abstract

Next Generation Sequencing (NGS) presents opportunities for the computational field to develop fast and accurate methods for the various challenges associated with NGS data. NGS technology generates large sets of short reads, typically 50 to 400 base pairs long, from biological experiments performed on samples taken from species. Such raw reads are not directly suitable for most analyses or comparative studies aimed at medical applications. Hence, the reads have to be assembled, or aligned to a reference, to form a complete genome sequence. During this process there is a high chance of erroneous positioning, so strategies must be applied to correct such errors. Once error-free sequence data is prepared, it is ready for further analysis, which may assist in identifying diseases and their causes, performing similarity checks, detecting genetic issues, and so on. All of these processes involve data of enormous size, on the order of millions of reads per day. To improve the performance of algorithms working on such vast amounts of data, technologies such as Big Data and Cloud Computing can be incorporated. This paper discusses the evolution of algorithms for NGS read alignment and the role of Big Data and Cloud Computing technologies.
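Since the paper surveys short read alignment algorithms, a minimal sketch of the seed-and-extend idea that many such aligners build on may help illustrate the core task. The sketch below is illustrative only; the k-mer size, mismatch threshold, and function names are assumptions, not details taken from the paper or from any specific aligner.

```python
# Minimal seed-and-extend sketch for short-read alignment (illustrative only;
# k=11 and max_mismatches=3 are assumed parameters, not from the paper).
from collections import defaultdict

def build_kmer_index(reference: str, k: int = 11) -> dict:
    """Index every k-mer of the reference genome by its start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def align_read(read: str, reference: str, index: dict, k: int = 11,
               max_mismatches: int = 3):
    """Seed with the read's k-mers, then verify each candidate position
    by counting mismatches over the full read length."""
    best = None  # (mismatches, reference start position)
    for offset in range(len(read) - k + 1):
        seed = read[offset:offset + k]
        for hit in index.get(seed, []):
            start = hit - offset
            if start < 0 or start + len(read) > len(reference):
                continue
            window = reference[start:start + len(read)]
            mismatches = sum(a != b for a, b in zip(read, window))
            if mismatches <= max_mismatches and (best is None or mismatches < best[0]):
                best = (mismatches, start)
    return best  # None if the read could not be placed

reference = "ACGTACGTGGTCAGTTACGATCGATCGGCTAGCTAGGATCCA"
index = build_kmer_index(reference)
print(align_read("CAGTTACGATCAATCGG", reference, index))  # -> (1, 11)
```

Real NGS aligners replace the hash-based index with more memory-efficient structures (such as suffix arrays or the Burrows-Wheeler transform) and handle gaps and quality scores, but the seed-then-verify pattern above is the shared starting point; distributing the per-read work is also what makes the problem a natural fit for Big Data and Cloud Computing frameworks.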


CITATION STYLE

APA

Rexie, J. A. M., Raimond, K., Mythily, M., & Kethsy Prabavathy, A. (2019). NGS short read alignment algorithms and the role of big data and cloud computing. International Journal of Innovative Technology and Exploring Engineering, 8(9), 967–971. https://doi.org/10.35940/ijitee.i8001.078919
