Popper's falsification and corroboration from the statistical perspectives

Abstract

The role of probability appears unchallenged as the key measure of uncertainty, used among other things for practical induction in the empirical sciences. Yet Popper was emphatic in his rejection of inductive probability and of the logical probability of hypotheses; furthermore, for him, the degree of corroboration cannot be a probability. Instead he proposed a deductive method of testing. This dialectical tension has many parallels in statistics, with the Bayesians on the logico-inductive side vs. the non-Bayesians or frequentists on the other. Simplistically, Popper seems to be on the frequentist side, but a recent synthesis on the non-Bayesian side might direct the Popperian views to a more nuanced destination. Logical probability seems perfectly suited to measure partial evidence or support, so what can we use if we are to reject it? Over the past 100 years, statisticians have developed a related concept called likelihood. As a measure of corroboration, the likelihood satisfies the Popperian requirement that it is not a probability. Our aim is to introduce the likelihood and its recent extension via a discussion of two well-known logical fallacies, in order to highlight that its lack of recognition may have led to unnecessary confusion in our discourse about falsification and corroboration of hypotheses.
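The abstract's central claim, that likelihood is a measure of support which is not itself a probability, can be illustrated with a standard textbook example (not drawn from the paper itself): a binomial likelihood. Viewed as a function of the data with the parameter fixed, the binomial formula gives probabilities that sum to 1; viewed as a function of the parameter with the data fixed, the same formula is a likelihood and need not integrate to 1. The numbers below (7 successes in 10 trials) are assumptions chosen purely for illustration.

```python
from math import comb

def binom_formula(p, y, n=10):
    """C(n, y) * p^y * (1 - p)^(n - y): a probability in y, a likelihood in p."""
    return comb(n, y) * p**y * (1 - p)**(n - y)

# As a function of the data y (parameter p fixed), probabilities sum to 1.
p_fixed = 0.5
total_prob = sum(binom_formula(p_fixed, y) for y in range(11))

# As a function of the parameter p (data y = 7 fixed), the likelihood
# does not integrate to 1 over p in [0, 1] -- here the exact integral
# is 1/(n+1) = 1/11.  Crude numerical check via a Riemann sum:
grid = [i / 1000 for i in range(1001)]
area = sum(binom_formula(p, y=7) for p in grid) / 1000

print(total_prob)  # 1.0 (a genuine probability distribution over y)
print(area)        # about 0.0909, not 1 (not a probability over p)
```

This is why likelihoods are compared via ratios rather than treated as probabilities of hypotheses, which is the sense in which the abstract says likelihood meets Popper's requirement.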

Citation (APA)

Lee, Y., & Pawitan, Y. (2021). Popper’s falsification and corroboration from the statistical perspectives. In Karl Popper’s Science and Philosophy (pp. 121–147). Springer International Publishing. https://doi.org/10.1007/978-3-030-67036-8_7
