Abstract
Autonomous knowledge transfer from a known task to a new one requires discovering task similarities and generalizing knowledge without the help of a designer or teacher. How transfer mechanisms in such learning may work is still an open question. Transfer of knowledge makes most sense for learners that regularly encounter novelty (other things being equal), as in the physical world. When new information must be unified with existing knowledge over time, a cumulative learning mechanism is required, one that increases the breadth, depth, and accuracy of an agent’s knowledge as experience accumulates. Here we address the requirements for what we refer to as autonomous cumulative transfer learning (ACTL) in novel task-environments, including implementation and evaluation criteria, and how it relies on similarity assessment and ampliative reasoning. While the analysis here is theoretical, the fundamental principles of the cumulative learning mechanism in our theory have been implemented and evaluated in a previously described running system. We present arguments for the theory from an empirical as well as an analytical viewpoint.
Citation
Sheikhlar, A., Thórisson, K. R., & Eberding, L. M. (2020). Autonomous cumulative transfer learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12177 LNAI, pp. 306–316). Springer. https://doi.org/10.1007/978-3-030-52152-3_32