Membox: Shared memory device for memory-centric computing applicable to deep learning problems

Abstract

Large-scale computational problems arising in modern computing, such as deep learning and big data analysis, cannot be solved on a single computer and are instead addressed with distributed computer systems. Because most distributed systems consist of many networked computers that must propagate their computational results to one another, communication overhead grows with scale and lowers computational efficiency. To address this problem, we proposed a distributed-system architecture built around a shared memory that multiple computers can access simultaneously. The architecture is intended to be implemented in an FPGA or ASIC. Using an FPGA board implementing the architecture, we built an actual distributed system and demonstrated its feasibility. We compared the results of a deep learning application test using our architecture with those obtained using Google TensorFlow's parameter server mechanism, showed that our architecture improves on the parameter server approach, and identified the expected problems that set the direction of future research.
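
To make the contrast in the abstract concrete, the following is a minimal conceptual sketch, not the paper's code and not TensorFlow's actual API. It assumes hypothetical ParameterServer and worker_gradient names and simply counts per-step transfers: in the parameter-server pattern every worker pushes gradients to and pulls weights from a central store (network traffic grows with the number of workers), whereas with a simultaneously accessible shared memory, as Membox provides in hardware, all workers update one parameter array directly.

import numpy as np

class ParameterServer:
    """Central parameter store: every worker round-trips through it."""
    def __init__(self, dim):
        self.weights = np.zeros(dim)

    def push(self, grad, lr=0.01):
        # One network transfer per worker per step in a real cluster.
        self.weights -= lr * grad

    def pull(self):
        # A second transfer per worker per step to fetch updated weights.
        return self.weights.copy()

def worker_gradient(weights, rng):
    # Stand-in for a local gradient computation on a mini-batch.
    return weights + rng.normal(size=weights.shape)

def train_with_parameter_server(num_workers=4, steps=10, dim=8):
    ps = ParameterServer(dim)
    rng = np.random.default_rng(0)
    transfers = 0
    for _ in range(steps):
        for _ in range(num_workers):
            grad = worker_gradient(ps.pull(), rng)  # pull current weights
            ps.push(grad)                           # push local gradient
            transfers += 2                          # traffic scales with workers
    return ps.weights, transfers

def train_with_shared_memory(num_workers=4, steps=10, dim=8, lr=0.01):
    # With a shared memory device, every worker reads and updates the same
    # parameter array in place; no per-step push/pull messages are needed.
    shared_weights = np.zeros(dim)
    rng = np.random.default_rng(0)
    for _ in range(steps):
        for _ in range(num_workers):
            grad = worker_gradient(shared_weights, rng)
            shared_weights -= lr * grad
    return shared_weights, 0

if __name__ == "__main__":
    _, ps_transfers = train_with_parameter_server()
    _, shm_transfers = train_with_shared_memory()
    print(f"parameter-server transfers: {ps_transfers}")
    print(f"shared-memory transfers:    {shm_transfers}")

Running the sketch prints 80 transfers for the parameter-server loop and 0 for the shared-memory loop, which is only meant to illustrate where the overhead discussed in the abstract comes from; real systems overlap and batch these transfers.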

Citation (APA)

Choi, Y., Lim, E., Shin, J., & Lee, C. H. (2021). Membox: Shared memory device for memory-centric computing applicable to deep learning problems. Electronics (Switzerland), 10(21). https://doi.org/10.3390/electronics10212720
