Characterizing and assessing human-like behavior in cognitive architectures


Abstract

The Turing Test is usually seen as the ultimate goal of Strong Artificial Intelligence (Strong AI), mainly for two reasons: first, it is assumed that if we can build a machine indistinguishable from a human, it is because we have completely discovered how a human mind is created; second, such an intelligent machine could replace or collaborate with humans in any imaginable complex task. Furthermore, if such a machine existed, it would probably surpass humans in many complex tasks (both physically and cognitively). But do we really need such a machine? Is it possible to build such a system in the short term? Do we have to settle for the now-classical narrow AI approaches? Isn't there a more reasonable medium-term challenge that the AI community should aim at? In this paper, we use the paradigmatic Turing Test to discuss the implications of aiming too high in the AI research arena; we analyze key factors involved in the design and implementation of variants of the Turing Test, and we propose a plausible medium-term agenda for the effective development of Artificial General Intelligence (AGI) from the point of view of artificial cognitive architectures. © 2013 Springer-Verlag.

Citation (APA)

Arrabales, R., Ledezma, A., & Sanchis, A. (2013). Characterizing and assessing human-like behavior in cognitive architectures. In Advances in Intelligent Systems and Computing (Vol. 196 AISC, pp. 7–15). Springer Verlag. https://doi.org/10.1007/978-3-642-34274-5_2
