Abstract
Efficiency is essential to support responsiveness to ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code that supports symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce DL code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged, at the expense of run-time performance. Hybrid approaches aim for the "best of both worlds," but the challenges of applying them in the real world are largely unknown. We conduct a data-driven analysis of the challenges, and resultant bugs, involved in writing reliable yet performant imperative DL code by studying 250 open-source projects, consisting of 19.7 MLOC, along with 470 and 446 manually examined code patches and bug reports, respectively. The results indicate that hybridization: (i) is prone to API misuse, (ii) can result in performance degradation, the opposite of its intention, and (iii) has limited application due to execution mode incompatibility. We put forth several recommendations, best practices, and anti-patterns for effectively hybridizing imperative DL code, potentially benefiting DL practitioners, API designers, tool developers, and educators.
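Hybridization here refers to converting imperative (eager) DL code to graph execution, for example via TensorFlow's `tf.function` decorator. The snippet below is a minimal sketch of that idea, assuming TensorFlow 2.x; the function names are illustrative, not from the study itself.

```python
import tensorflow as tf

# Imperative (eager) style: ops execute immediately, which is
# intuitive and easy to debug but forgoes graph-level optimization.
def euclidean_eager(a, b):
    return tf.sqrt(tf.reduce_sum(tf.square(a - b)))

# Hybridized: tf.function traces the Python body into a graph for
# faster repeated execution. Python side effects (e.g., print) run
# only during tracing -- one common source of the API misuse the
# study reports.
@tf.function
def euclidean_graph(a, b):
    return tf.sqrt(tf.reduce_sum(tf.square(a - b)))

a = tf.constant([3.0, 0.0])
b = tf.constant([0.0, 4.0])
print(float(euclidean_eager(a, b)))  # 5.0
print(float(euclidean_graph(a, b)))  # 5.0
```

Note that both calls compute the same value; only the execution mode differs, which is why mode incompatibilities can surface silently.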
Citation:
Velez, T. C., Khatchadourian, R., Bagherzadeh, M., & Raja, A. (2022). Challenges in Migrating Imperative Deep Learning Programs to Graph Execution: An Empirical Study. In Proceedings - 2022 Mining Software Repositories Conference, MSR 2022 (pp. 469–481). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3524842.3528455