Recent developments in high-throughput genome sequencing technologies, such as next-generation sequencers (NGS) and single nucleotide polymorphism (SNP) microarrays, have produced a flood of human genome data. Large-scale human genetic studies incorporating more than 100,000 subjects have identified thousands of genetic variants that confer causal risk of human diseases. Handling such big data requires sophisticated computational and statistical approaches. While constructing in silico pipelines to translate NGS raw reads into human genome variants is necessary, developing further strategies to interpret human genome data for disease biology elucidation and novel drug discovery is becoming increasingly important. Statistical genetics is a research field that evaluates causality between human genetic and phenotypic variations, and it is considered a promising tool for translationally connecting human genome data with a variety of biological and medical resources. In this review, we highlight basic theory, the latest updates, and future directions of human genome data analysis with a series of introductory examples.
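As an introductory illustration of the kind of analysis statistical genetics performs, the sketch below runs a simple allelic case-control association test for a single SNP: a chi-square test (1 degree of freedom) on a 2x2 table of allele counts, plus the odds ratio of the risk allele. The allele counts used in the example are hypothetical, chosen only to demonstrate the calculation; real genome-wide studies apply this kind of test to millions of variants with multiple-testing correction.

```python
import math

def allelic_association(case_a, case_b, ctrl_a, ctrl_b):
    """Chi-square test (1 df) of allele counts in cases vs. controls.

    case_a/case_b: counts of risk/other allele in cases.
    ctrl_a/ctrl_b: counts of risk/other allele in controls.
    Returns (chi2 statistic, p-value, odds ratio).
    """
    table = [[case_a, case_b], [ctrl_a, ctrl_b]]
    n = case_a + case_b + ctrl_a + ctrl_b
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            row_total = sum(table[i])
            col_total = table[0][j] + table[1][j]
            expected = row_total * col_total / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    # A chi-square variable with 1 df is the square of a standard normal,
    # so the upper tail probability is erfc(sqrt(chi2 / 2)).
    p_value = math.erfc(math.sqrt(chi2 / 2))
    odds_ratio = (case_a * ctrl_b) / (case_b * ctrl_a)
    return chi2, p_value, odds_ratio

# Hypothetical allele counts: 1,000 case and 1,000 control chromosomes.
chi2, p, odds = allelic_association(300, 700, 250, 750)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, OR = {odds:.3f}")
```

In practice, dedicated toolkits such as PLINK perform this class of test at genome-wide scale, along with logistic regression models that adjust for covariates such as ancestry.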
Citation: Okada, Y. (2017). Statistical genetics and genome data analysis. Transactions of Japanese Society for Medical and Biological Engineering, 55(4), 165–172. https://doi.org/10.11239/jsmbe.55.165