Abstract
The words “indeterministic study” are used to designate research aiming to determine how frequently a quantity X characterizing the phenomena considered assumes its various particular values. If the purpose of research is to establish the exact value of X as a function of other variables, then this research is “deterministic.” In the history of indeterminism in science four (overlapping) periods are discernible.
a. Period of “marginal indeterminism.” This was the period, symbolized by the names of Laplace and Gauss, in which research in science was all deterministic, with just one domain, that of errors of measurement, treated indeterministically.
b. Period of “static indeterminism,” roughly covering the end of the nineteenth and the beginning of the twentieth centuries, symbolized by the names of Bruns, Charlier, Edgeworth, Galton and Karl Pearson. Here, the main subject of study was a “population,” and efforts were made to develop systems of frequency curves to describe the empirical distributions analytically.
c. The third discernible period, roughly from 1920 to 1940, may be termed the period of “static indeterministic experimentation.” It is marked by the name of R. A. Fisher and by his book The Design of Experiments. A typical problem considered was: do these two populations have the same distribution of X? This and similar questions led to the development of the basic ideas of tests of statistical hypotheses and of estimation, and also of the appropriate techniques. All of these are currently at the disposal of the applied statistician and in constant use.
d. The fourth period in the history of indeterminism, currently in full swing, the period of “dynamic indeterminism,” is characterized by the search for evolutionary chance mechanisms capable of explaining the various frequencies observed in the development of the phenomena studied.
The chance mechanism of carcinogenesis and the chance mechanism behind the varying properties of the comets in the Solar System exemplify the subjects of dynamic indeterministic studies. One might hazard the assertion that every serious contemporary study is a study of the chance mechanism behind some phenomena. The statistical and probabilistic tool in such studies is the theory of stochastic processes, which now involves many unsolved problems. In order that the applied statistician be in a position to cooperate effectively with the modern experimental scientist, the theoretical equipment of the statistician must include familiarity with stochastic processes and the capability of dealing with them.
Copyright Taylor & Francis Group, LLC.
Citation
Neyman, J. (1960). Indeterminism in Science and New Demands on Statisticians. Journal of the American Statistical Association, 55(292), 625–639. https://doi.org/10.1080/01621459.1960.10483363