Modularity in Deep Learning: A Survey


Abstract

Modularity is a general principle present in many fields. It offers attractive advantages, including ease of conceptualization, interpretability, scalability, module combinability, and module reusability. The deep learning community has long sought to take inspiration from the modularity principle, either implicitly or explicitly, and this interest has grown in recent years. We review the notion of modularity in deep learning around three axes: data, task, and model, which characterize the life cycle of deep learning. Data modularity refers to the observation or creation of data groups for various purposes. Task modularity refers to the decomposition of tasks into sub-tasks. Model modularity means that the architecture of a neural network system can be decomposed into identifiable modules. We describe different instantiations of the modularity principle, and we contextualize their advantages in different deep learning sub-fields. Finally, we conclude the paper with a discussion of the definition of modularity and directions for future research.
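To make the third axis concrete: model modularity is the pattern that frameworks such as PyTorch (`nn.Module`, `nn.Sequential`) are built on. The following is a minimal illustrative sketch, not code from the survey, using plain Python callables in place of a real framework; all class names here (`Linear`, `ReLU`, `Sequential`) are hypothetical stand-ins.

```python
# Illustrative sketch of model modularity: a network decomposed into
# identifiable, reusable, combinable modules. Scalar arithmetic stands in
# for tensor operations; real frameworks follow the same composition pattern.

class Linear:
    """A hypothetical affine module: y = w * x + b (scalar, for illustration)."""
    def __init__(self, w, b):
        self.w, self.b = w, b
    def __call__(self, x):
        return self.w * x + self.b

class ReLU:
    """A nonlinearity module, reusable anywhere in a network."""
    def __call__(self, x):
        return max(0.0, x)

class Sequential:
    """Combines modules into a pipeline; each sub-module stays identifiable."""
    def __init__(self, *modules):
        self.modules = modules
    def __call__(self, x):
        for m in self.modules:
            x = m(x)
        return x

# Modules are built once and freely recombined:
net = Sequential(Linear(2.0, -1.0), ReLU(), Linear(0.5, 0.0))
print(net(3.0))   # (2*3 - 1) = 5 -> ReLU -> 5 * 0.5 = 2.5
print(net(-1.0))  # (2*(-1) - 1) = -3 -> ReLU -> 0 -> 0.0
```

Because each module exposes the same call interface, a sub-module can be swapped, reused, or inspected in isolation, which is exactly the combinability and reusability advantage the abstract attributes to modularity.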

Citation (APA)

Sun, H., & Guyon, I. (2023). Modularity in Deep Learning: A Survey. In Lecture Notes in Networks and Systems (Vol. 739 LNNS, pp. 561–595). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-37963-5_40
