High-throughput technologies can routinely assay biological or clinical samples and produce wide data sets in which each sample is associated with tens of thousands of measurements. Such data sets can be mined to discover biomarkers and to develop statistical models capable of predicting an endpoint of interest from the data measured in the samples. The field of biomarker model development combines methods from statistics and machine learning to develop and evaluate predictive biomarker models. In this chapter, we discuss the computational steps involved in developing biomarker models designed to predict information about individual samples, and we review approaches often used to implement each step. A practical example of biomarker model development in a large gene expression data set is presented. This example leverages BDVal, a suite of biomarker model development programs developed as an open-source project (see http://bdval.org/).
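The workflow the chapter describes (select informative features from a wide data set, fit a classifier, estimate predictive performance by cross-validation) can be sketched in a few lines. This is a minimal illustration on synthetic data using scikit-learn, not the BDVal implementation itself; the feature count, selector, and classifier are arbitrary choices for the sketch.

```python
# Hedged sketch of a biomarker-style pipeline: feature selection + classifier,
# evaluated by cross-validation on synthetic "wide" data (many more
# measurements than samples). Not the BDVal implementation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic data set: 100 samples, 5,000 measurements each.
X, y = make_classification(n_samples=100, n_features=5000,
                           n_informative=20, random_state=0)

# Feature selection sits inside the pipeline so it is re-fit on each
# training fold; selecting features on the full data set before
# cross-validation would leak information and inflate the estimate.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=50)),
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(model, X, y, cv=5)
print("Cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```

Keeping feature selection inside the cross-validation loop is the key design point: it mirrors the model development and validation separation that the chapter emphasizes.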
CITATION STYLE
Deng, X., & Campagne, F. (2010). Introduction to the development and validation of predictive biomarker models from high-throughput data sets. Methods in Molecular Biology (Clifton, N.J.), 620, 435–470. https://doi.org/10.1007/978-1-60761-580-4_15