From next-generation resequencing reads to a high-quality variant data set

Abstract

Sequencing has revolutionized biology by permitting the analysis of genomic variation at an unprecedented resolution. High-throughput sequencing is fast and inexpensive, making it accessible for a wide range of research topics. However, the data produced contain subtle but complex errors, biases and uncertainties that pose statistical and computational challenges for the reliable detection of variants. To tap the full potential of high-throughput sequencing, a thorough understanding of both the data produced and the available methodologies is required. Here, I review several commonly used methods for generating and processing next-generation resequencing data, discuss the influence of errors and biases together with their implications for downstream analyses, and provide general guidelines and recommendations for producing high-quality single-nucleotide polymorphism data sets from raw reads, highlighting several sophisticated reference-based methods that represent the current state of the art.
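The reference-based variant-calling methods the review covers typically weigh each aligned read base by its Phred-scaled error probability when scoring candidate genotypes. As a minimal illustrative sketch (not the article's own implementation), the Python snippet below implements a simplified version of the classic diploid genotype-likelihood model underlying callers such as GATK and samtools/bcftools; the function and variable names are hypothetical.

```python
import math
from itertools import combinations_with_replacement

def phred_to_error(q: int) -> float:
    """Convert a Phred quality score Q to an error probability 10^(-Q/10)."""
    return 10 ** (-q / 10)

def genotype_log_likelihood(bases, quals, genotype):
    """Log P(read bases | diploid genotype) under the standard
    independent-errors model: each read base is drawn uniformly from
    the two genotype alleles; it matches its source allele with
    probability 1 - e and is one of the three other bases with e / 3."""
    a1, a2 = genotype
    ll = 0.0
    for b, q in zip(bases, quals):
        e = phred_to_error(q)
        p1 = (1 - e) if b == a1 else e / 3
        p2 = (1 - e) if b == a2 else e / 3
        ll += math.log(0.5 * p1 + 0.5 * p2)
    return ll

def call_genotype(bases, quals, alleles="ACGT"):
    """Return the maximum-likelihood diploid genotype at a single site."""
    genotypes = list(combinations_with_replacement(alleles, 2))
    scored = {g: genotype_log_likelihood(bases, quals, g) for g in genotypes}
    return max(scored, key=scored.get), scored

# Example: a pileup of 10 reads at one site, mostly 'A' with some 'G'.
bases = list("AAAAAGGGAA")
quals = [30, 32, 28, 35, 30, 31, 29, 33, 30, 34]
best, _ = call_genotype(bases, quals)
print("ML genotype:", "".join(best))  # ~30% G reads favor the het ('A', 'G')
```

Production callers extend this core model with genotype priors, mapping qualities, local realignment or haplotype assembly, and joint calling across samples, which is precisely where the errors and biases discussed in the review enter the picture.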

Citation (APA)

Pfeifer, S. P. (2017). From next-generation resequencing reads to a high-quality variant data set. Heredity. https://doi.org/10.1038/hdy.2016.102
