Abstract
The idea that internal models of the world might be useful has generally been rejected by embodied AI for the same reasons that led to its rejection by behaviour-based robotics. This paper re-examines the issue from historical, biological, and functional perspectives; the view that emerges indicates that internal models are essential for achieving cognition, that their use is widespread in biological systems, and that there are several good but neglected examples of their use within embodied AI. Consideration of the example of a hypothetical autonomous embodied agent that must execute a complex mission in a dynamic, partially unknown, and hostile environment leads to the conclusion that the necessary cognitive architecture is likely to contain separate but interacting models of the body and of the world. This arrangement is shown to have intriguing parallels with new findings on the infrastructure of consciousness, leading to the speculation that the reintroduction of internal models into embodied AI may lead not only to improved machine cognition but also, in the long run, to machine consciousness. © Springer-Verlag Berlin Heidelberg 2004.
Citation
Holland, O. (2004). The future of embodied artificial intelligence: Machine consciousness? In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3139, pp. 37–53). Springer Verlag. https://doi.org/10.1007/978-3-540-27833-7_3