Practical reasoning is such an essential cornerstone of artificial intelligence that it is hard to see how autonomous agents could be realized without it. As a first step of practical reasoning, an autonomous agent must form its intentions by choosing among its motivations in light of its beliefs. It is also expected to seamlessly revise its intentions whenever its beliefs or motivations change. In the modern world, endowing agents with explainable practical reasoning capabilities has become a pressing priority for fostering the trustworthiness of artificial agents. An adequate framework for practical reasoning must be able to (i) capture the process of intention formation, (ii) model the joint revision of beliefs and intentions, and (iii) provide explanations for the chosen beliefs and intentions. Despite the abundance of approaches to modelling practical reasoning in the literature, each lacks at least one of these capabilities. In this paper, we present formal algebraic semantics for a logical language suited to practical reasoning, and we demonstrate that the language possesses all of the aforementioned capabilities, providing an adequate framework for explainable practical reasoning.
Citation
Ehab, N., & Ismail, H. O. (2021). Towards Explainable Practical Agency: A Logical Perspective. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12688 LNAI, pp. 260–279). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-82017-6_16