Explaining Everything

Abstract

Oxford physicist David Deutsch recently claimed that AI researchers had made no progress towards creating truly intelligent agents and were unlikely to do so until they began making machines that could produce creative explanations for themselves. Deutsch argued that AI must be possible because of the Universality of Computation, but that progress towards it would require nothing less than a new philosophical direction: a rejection of inductivism in favour of fallibilism. This paper sets out to review and respond to these claims. After first establishing a broad framework and terminology with which to discuss these questions, it examines the inductivist and fallibilist philosophies. It argues that Deutsch is right about fallibilism, not only because of the need for creative explanations but also because it makes it easier for agents to create and maintain models—a crucial ability for any sophisticated agent. However, his claim that AI research has made no progress is debatable, if not mistaken. The paper concludes with suggestions for ways in which agents might come up with truly creative explanations and looks briefly at the meaning of knowledge and truth in a fallibilist world.

Citation (APA)

Davenport, D. (2016). Explaining Everything. In Synthese Library (Vol. 376, pp. 341–354). Springer Science and Business Media B.V. https://doi.org/10.1007/978-3-319-26485-1_20
