Statistical perspectives on “big data”

Abstract

As our information infrastructure evolves, our ability to store, extract, and analyze data is rapidly changing. Big data is a popular term that is used to describe the large, diverse, complex and/or longitudinal datasets generated from a variety of instruments, sensors and/or computer-based transactions. The term big data refers not only to the size or volume of data, but also to the variety of data and the velocity or speed of data accrual. As the volume, variety, and velocity of data increase, our existing analytical methodologies are stretched to new limits. These changes pose new opportunities for researchers in statistical methodology, including those interested in surveillance and statistical process control methods. Although it is well documented that harnessing big data to make better decisions can serve as a basis for innovative solutions in industry, healthcare, and science, these solutions can be found more easily with sound statistical methodologies. In this paper, we discuss several big data applications to highlight the opportunities and challenges for applied statisticians interested in surveillance and statistical process control. Our goal is to bring the research issues into better focus and encourage methodological developments for big data analysis in these areas.
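To make the surveillance/statistical process control setting concrete, below is a minimal sketch (not from the paper) of how a classical monitoring scheme might be applied to a high-velocity data stream. It uses an EWMA control chart with standard textbook limits; the smoothing constant, control-limit multiplier, and the simulated stream are illustrative assumptions, not values taken from the article.

```python
import numpy as np

def ewma_monitor(stream, mu0, sigma0, lam=0.2, L=3.0):
    """Monitor a data stream with an EWMA control chart.

    Returns the index of the first observation whose EWMA statistic
    falls outside the (time-varying) control limits, or None if the
    stream never signals. mu0 and sigma0 are the in-control mean and
    standard deviation; lam and L are illustrative tuning choices.
    """
    z = mu0  # EWMA statistic starts at the in-control mean
    for t, x in enumerate(stream, start=1):
        z = lam * x + (1 - lam) * z
        # Exact EWMA standard deviation at time t
        sd_z = sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        ucl, lcl = mu0 + L * sd_z, mu0 - L * sd_z
        if z > ucl or z < lcl:
            return t, z, (lcl, ucl)
    return None

# Hypothetical high-velocity stream: in control for 500 points,
# then a small sustained mean shift that a Shewhart chart would miss.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(0.75, 1.0, 500)])
print(ewma_monitor(stream, mu0=0.0, sigma0=1.0))
```

Even this simple example hints at the challenges the abstract raises: with high-velocity streams, limits must be updated online, and with high-volume, high-variety data the assumptions of independence and a known in-control distribution become harder to justify.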

Citation (APA)

Megahed, F. M., & Jones-Farmer, L. A. (2015). Statistical perspectives on "big data." In Frontiers in Statistical Quality Control 10 (Vol. 11, pp. 29–47). Springer International Publishing. https://doi.org/10.1007/978-3-319-12355-4_3
