Embracing inference as action: A step towards human-level reasoning

Abstract

Human-level AI requires the ability to reason about the beliefs of other agents, even when those agents have reasoning styles very different from the AI's own. How to carry out reasonable inferences in such situations, as well as in situations where an agent must reason about another agent's beliefs about yet another agent's beliefs, is under-studied. We show how such reasoning can be carried out in CECAC, a new variant of the cognitive event calculus, by introducing several powerful new features for automated reasoning. First, classical logic is implemented at the “system level” while nonclassical logics are used at the “belief level”. Second, CECAC treats every inference made by an agent as an action. This opens the door to two further features: epistemic boxes, a sort of frame in which the reasoning of an individual agent can be simulated, and evaluated codelets, which allow our reasoner to carry out operations beyond the limits of many current systems. We explain how these features are achieved and implemented in the MATR reasoning system, and discuss their consequences.
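To make the two central ideas concrete, here is a minimal, hypothetical sketch of inference-as-action and nested epistemic boxes. Everything in it (the names EpistemicBox, InferenceAction-style logging, the toy modus ponens rule) is illustrative only and does not reflect how CECAC or MATR is actually implemented; it merely shows one way to simulate a second agent's reasoning, under possibly different rules, inside an isolated frame while recording each inference as an explicit action.

```python
# Hypothetical sketch only: names and structure are NOT from the paper or MATR.
# It illustrates (a) treating each inference as an explicit, logged action and
# (b) simulating an agent's reasoning inside an isolated, nestable "box".

from dataclasses import dataclass, field
from typing import Callable

# An inference rule maps the current belief set to newly derivable formulas.
Rule = Callable[[frozenset[str]], set[str]]

@dataclass
class EpistemicBox:
    """Isolated frame in which one agent's reasoning is simulated."""
    agent: str
    beliefs: set[str] = field(default_factory=set)
    rules: list[Rule] = field(default_factory=list)
    log: list[str] = field(default_factory=list)  # inferences recorded as actions
    inner: dict[str, "EpistemicBox"] = field(default_factory=dict)  # nested boxes

    def step(self) -> bool:
        """Apply each rule once; record every new conclusion as an action."""
        snapshot = frozenset(self.beliefs)
        new: set[str] = set()
        for rule in self.rules:
            new |= rule(snapshot) - self.beliefs
        for formula in sorted(new):
            self.log.append(f"{self.agent} infers {formula}")
        self.beliefs |= new
        return bool(new)

    def run(self, max_steps: int = 10) -> None:
        """Iterate until no rule yields anything new (or a step limit is hit)."""
        for _ in range(max_steps):
            if not self.step():
                break

# A toy modus ponens rule over strings such as "p" and "p->q".
def modus_ponens(beliefs: frozenset[str]) -> set[str]:
    out: set[str] = set()
    for f in beliefs:
        if "->" in f:
            ante, cons = f.split("->", 1)
            if ante in beliefs:
                out.add(cons)
    return out

# Alice reasons about Bob: her box contains a nested box simulating Bob,
# whose rule set could differ from hers (e.g. a weaker, nonclassical logic).
bob_model = EpistemicBox("bob", beliefs={"p", "p->q"}, rules=[modus_ponens])
alice = EpistemicBox("alice", beliefs={"q->r"}, rules=[modus_ponens],
                     inner={"bob": bob_model})

alice.inner["bob"].run()   # simulate Bob's reasoning inside Alice's frame
print(bob_model.beliefs)   # now includes the derived "q"
print(bob_model.log)       # ["bob infers q"] -- each inference is an action
```

Because each box carries its own rule list, the simulated agent's logic can differ from the host reasoner's, which mirrors the abstract's split between classical logic at the system level and nonclassical logics at the belief level; the abstract's evaluated codelets would correspond, very roughly, to allowing arbitrary evaluated functions in the Rule position.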

Citation (APA)

Licato, J., & Fowler, M. (2016). Embracing inference as action: A step towards human-level reasoning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9782, pp. 192–201). Springer Verlag. https://doi.org/10.1007/978-3-319-41649-6_19
