Reproducible Model Sharing for AI Practitioners


Abstract

Rapid advances in AI and machine learning (ML), in both industry and academia, have created a need for large-scale, efficient, and safe model sharing. With recent models, reproducibility has become tremendously complex, both at the execution level and in terms of resource consumption. Although sharing source code and access to data is becoming common practice, reproducing the training process is constrained by software dependencies, (sometimes large-scale) computation requirements, specialized hardware, and time sensitivity. Beyond these limitations, trained models are gaining financial value, and organizations are reluctant to release them for public access. Together, these factors severely hinder timely dissemination and the scientific sharing and reviewing process, limiting reproducibility. In this work we make the case for transparent and seamless model sharing to ease reviewing and reproducibility for ML practitioners. We design and implement a platform that enables practitioners to deploy trained models and create easy-to-use inference environments, which can be shared with peers and conference reviewers, or made publicly available. Our solution is provider-agnostic and can run on institutional infrastructures or on public/private cloud providers.
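The kind of easy-to-use, provider-agnostic inference environment the abstract describes can be illustrated with a minimal sketch: a trained model wrapped behind a plain HTTP endpoint, using only the Python standard library so it runs unchanged on institutional or cloud infrastructure. The `predict` function and its weights here are hypothetical stand-ins, not the authors' platform; a real deployment would load the shared model artifact instead.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for a trained model's forward pass; a real
# deployment would load shared model weights here instead.
def predict(features):
    # Toy linear model: weighted sum of the input features.
    weights = [0.5, -0.25, 1.0]
    return sum(w * x for w, x in zip(weights, features))

class InferenceHandler(BaseHTTPRequestHandler):
    """Accepts POST requests with a JSON body {"features": [...]}
    and returns {"prediction": ...} as JSON."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve (e.g. inside a container with port 8080 published):
#     HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```

Because the interface is just JSON over HTTP, peers or reviewers need only the endpoint URL to run inference, without installing the training stack or holding the model weights themselves.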


Citation (APA)

Moradi, A., & Uta, A. (2021). Reproducible Model Sharing for AI Practitioners. In Proceedings of the 5th Workshop on Distributed Infrastructures for Deep Learning, DIDL 2021 (pp. 1–6). Association for Computing Machinery, Inc. https://doi.org/10.1145/3493652.3505630
