Abstraction, mimesis and the evolution of deep learning

Citations: 0
Readers (Mendeley): 17

This article is free to access.

Abstract

Deep learning developers typically rely on deep learning software frameworks (DLSFs)—simply described as pre-packaged libraries of programming components that provide high-level access to deep learning functionality. New DLSFs progressively encapsulate mathematical, statistical and computational complexity. Such higher levels of abstraction subsequently make it easier for deep learning methodology to spread through mimesis (i.e., imitation of models perceived as successful). In this study, we quantify this increase in abstraction and discuss its implications. Analyzing publicly available code from GitHub, we found that the introduction of DLSFs correlates both with significant increases in the number of deep learning projects and substantial reductions in the number of lines of code used. We subsequently discuss and argue the importance of abstraction in deep learning with respect to ephemeralization, technological advancement, democratization, adopting timely levels of abstraction, the emergence of mimetic deadlocks, issues related to the use of black box methods including privacy and fairness, and the concentration of technological power. Finally, we also discuss abstraction as a symptom of an ongoing technological metatransition.
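The abstraction effect the abstract describes—frameworks encapsulating mathematical and computational complexity so that user-facing code shrinks—can be illustrated with a minimal sketch. The example below (not taken from the article; the `LogisticModel` class is a hypothetical stand-in for a DLSF) writes the same gradient-descent training procedure twice: once with the user managing weights and gradients directly, and once hidden behind a framework-style `fit()` call that reduces the user's code to a single line.

```python
# Illustrative sketch only: the same training procedure at two levels of
# abstraction. "LogisticModel" is a made-up mini-framework, not a real DLSF.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                       # toy dataset
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(p, y):
    p = np.clip(p, 1e-12, 1 - 1e-12)                # guard against log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# --- Low-level version: the user owns the weights and the update rule ---
w = np.zeros(3)
for _ in range(200):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)                   # gradient of mean loss
    w -= 0.5 * grad
loss_manual = cross_entropy(sigmoid(X @ w), y)

# --- Framework-style version: the same loop encapsulated behind fit() ---
class LogisticModel:
    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def fit(self, X, y, lr=0.5, steps=200):
        for _ in range(steps):
            p = sigmoid(X @ self.w)
            self.w -= lr * X.T @ (p - y) / len(y)
        return self

    def loss(self, X, y):
        return cross_entropy(sigmoid(X @ self.w), y)

model = LogisticModel(3).fit(X, y)                  # one line of user code
loss_framework = model.loss(X, y)
```

Both versions perform identical arithmetic, so they reach the same loss; what changes is how many lines the end user must write and understand—precisely the quantity the study measures across GitHub projects before and after DLSF adoption.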

Citation (APA)

Eklöf, J., Hamelryck, T., Last, C., Grima, A., & Snis, U. L. (2024). Abstraction, mimesis and the evolution of deep learning. AI and Society, 39(5), 2349–2357. https://doi.org/10.1007/s00146-023-01688-z
