GPU accelerated circuit analysis using machine learning-based parallel computing model

Abstract

Circuit simulators provide a virtual environment for testing circuit designs, saving time and hardware cost. However, as the number of components in a design grows, most simulators take much longer to test it, in many cases days or even weeks. Simulators therefore need to be improved to handle large datasets while maintaining accuracy. In this paper, we propose machine learning-based parallel implementations of a circuit analyser on a graphics card using the Compute Unified Device Architecture (CUDA). After parsing the netlist file, the first approach analyses compute-intensive mathematical functions and converts them into parallel executable versions. Further, we propose Design-Level Parallelism with a hybrid parallel implementation of components and processing methods. Dynamic decision-making is required to select which functions and parameters to map onto the Graphics Processing Unit (GPU). To reduce load overhead, a machine learning clustering approach has been adopted. The combination of procedure clustering and mapping takes a few cycles, but overall it improves efficiency compared with serial processing.
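The abstract's clustering-based mapping step can be illustrated with a minimal sketch. All names here (`Procedure`, the cost model, the thresholds) are illustrative assumptions, not the authors' actual implementation: procedures extracted from the netlist are clustered by an estimated compute cost using a simple 1-D 2-means clustering, and the heavier cluster is mapped to the GPU while the lighter one stays on the CPU.

```python
# Hypothetical sketch of clustering-driven CPU/GPU mapping.
# Not the paper's implementation; the cost model and names are assumptions.
from dataclasses import dataclass

@dataclass
class Procedure:
    name: str
    flops: float        # estimated floating-point operations
    data_bytes: float   # operand size that would need host-device transfer

def cost(p: Procedure, transfer_penalty: float = 1e-3) -> float:
    # Crude cost model: compute work minus a penalty for data transfer.
    return p.flops - transfer_penalty * p.data_bytes

def two_means(xs, iters=20):
    # Minimal 1-D k-means with k=2 over scalar costs.
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        groups = ([], [])
        for x in xs:
            groups[abs(x - hi) < abs(x - lo)].append(x)
        lo = sum(groups[0]) / len(groups[0]) if groups[0] else lo
        hi = sum(groups[1]) / len(groups[1]) if groups[1] else hi
    return lo, hi

def map_to_devices(procs):
    xs = [cost(p) for p in procs]
    lo, hi = two_means(xs)
    # Procedures nearer the heavy centroid are mapped to the GPU.
    return {p.name: ("GPU" if abs(cost(p) - hi) < abs(cost(p) - lo) else "CPU")
            for p in procs}

procs = [
    Procedure("matrix_solve", flops=5e9, data_bytes=8e6),
    Procedure("device_eval",  flops=2e9, data_bytes=4e6),
    Procedure("dc_sweep_io",  flops=1e4, data_bytes=1e3),
]
print(map_to_devices(procs))
```

The clustering itself is cheap (a few iterations over scalar costs), which matches the abstract's claim that clustering and mapping take only a few cycles relative to the savings from parallel execution.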

Citation (APA)

Jagtap, S. V., & Rao, Y. S. (2020). GPU accelerated circuit analysis using machine learning-based parallel computing model. SN Applied Sciences, 2(5). https://doi.org/10.1007/s42452-020-2667-6
