Explainable intelligent environments

Abstract

The main focus of an intelligent environment, as with other applications of Artificial Intelligence, is generally on providing good decisions for the management of the environment or for the support of human decision-making processes. The quality of the system is often measured in terms of accuracy or other performance metrics computed on labeled data. Other equally important aspects are usually disregarded, such as the ability to produce an intelligible explanation for the user of the environment. That is, aside from proposing an action, prediction, or decision, the system should also provide an explanation that allows the user to understand the rationale behind the output. This is becoming increasingly important at a time when algorithms gain growing influence over our lives and start to make decisions that significantly affect them, so much so that the EU recently regulated on the issue of a “right to explanation”. In this paper we propose a human-centric intelligent environment that takes into consideration the domain of the problem and the mental model of the human expert in order to provide intelligible explanations that can improve the efficiency and quality of decision-making processes.
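The abstract does not detail the authors' mechanism, but the prediction-plus-explanation pattern it describes can be illustrated with a minimal sketch. In the hedged Python example below, all names, the feature set, and the linear weighting scheme are hypothetical assumptions chosen for illustration, not the paper's actual method: a decision is returned together with human-readable reasons derived from per-feature contributions.

```python
from dataclasses import dataclass

# Illustrative sketch only: a decision function that returns an
# explanation alongside its output. The features, weights, and
# scoring scheme are assumptions, not the authors' method.

@dataclass
class ExplainedDecision:
    decision: str
    explanation: list[str]  # human-readable reasons, ordered by impact

def decide(features: dict[str, float],
           weights: dict[str, float]) -> ExplainedDecision:
    # Keep per-feature contributions so they can be reported back
    # to the user as the rationale behind the output.
    contributions = {name: value * weights.get(name, 0.0)
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "act" if score > 0 else "do not act"
    # Phrase the largest contributions as inspectable reasons.
    reasons = [f"{name} contributed {value:+.2f} to the score"
               for name, value in sorted(contributions.items(),
                                         key=lambda kv: -abs(kv[1]))]
    return ExplainedDecision(decision, reasons)

if __name__ == "__main__":
    result = decide({"temperature": 0.8, "occupancy": -0.3},
                    {"temperature": 1.0, "occupancy": 2.0})
    print(result.decision)
    for reason in result.explanation:
        print(" -", reason)
```

The design point this sketch makes is the one the abstract argues for: the system's output is a pair (decision, rationale) rather than a bare decision, so the user can check the reasoning against their own mental model of the domain.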

Cite

APA

Carneiro, D., Silva, F., Guimarães, M., Sousa, D., & Novais, P. (2021). Explainable intelligent environments. In Advances in Intelligent Systems and Computing (Vol. 1239 AISC, pp. 34–43). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58356-9_4
