What is normalization? The strategies employed in top-down and bottom-up proteome analysis workflows

36 citations · 172 Mendeley readers

Abstract

The accurate quantification of changes in protein abundance is one of the main applications of proteomics. Accuracy can be compromised by bias and error introduced at many points in the experimental process, and normalization strategies are crucial for overcoming this bias and returning the sample to its regular biological condition, or normal state. Much work has been published on normalizing data post-acquisition, with many algorithms and statistical processes available. However, many other sources of bias that arise during experimental design and sample handling remain unaddressed. This article aims to shed light on these potential sources of bias and on where normalization could be applied to return the sample to its normal state. Throughout, we suggest solutions where possible, but in some cases none are available. We therefore see this article as a starting point for discussion of the definition of, and the issues surrounding, normalization as it applies to the proteomic analysis of biological samples. Specifically, we discuss a wide range of normalization techniques applicable at each stage of the sample preparation and analysis process.
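The abstract refers to the many algorithms available for post-acquisition normalization. One of the simplest and most widely used is per-sample median scaling, which corrects for differences in total loaded material between samples. The sketch below is an illustration of that general technique only, not a method taken from this paper; the data layout (a dict of sample name to intensity list) is an assumption for the example.

```python
from statistics import median

def median_normalize(samples):
    """Scale each sample's protein intensities so all samples share a common median.

    `samples` maps sample name -> list of protein intensities (hypothetical layout).
    This is a generic post-acquisition correction, not the paper's specific method.
    """
    # Median intensity of each sample, used as its scaling reference.
    sample_medians = {name: median(vals) for name, vals in samples.items()}
    # Target: the median of the per-sample medians.
    target = median(sample_medians.values())
    # Rescale every intensity so each sample's median equals the target.
    return {
        name: [v * target / sample_medians[name] for v in vals]
        for name, vals in samples.items()
    }

# Example: sample B was loaded at twice the amount of sample A.
raw = {"A": [10.0, 20.0, 30.0], "B": [20.0, 40.0, 60.0]}
norm = median_normalize(raw)
# After scaling, both samples have the same median and identical profiles.
```

After normalization the loading difference is removed, so a downstream comparison sees only genuine abundance changes rather than the twofold loading artifact.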

Citation (APA):

O’Rourke, M. B., Town, S. E. L., Dalla, P. V., Bicknell, F., Belic, N. K., Violi, J. P., … Padula, M. P. (2019, September 1). What is normalization? The strategies employed in top-down and bottom-up proteome analysis workflows. Proteomes. MDPI AG. https://doi.org/10.3390/proteomes7030029
