Abstract
In the era of Big Data, dynamic programming (DP) is an increasingly attractive approach to large real-world optimization problems. Examples include RNA folding, gerrymandering, and scheduling. DP is widely used as a polynomial-time solution for large problems whose solutions have complicated interdependencies. DP involves many rounds of computation that rely on results from earlier rounds. Intra-round computations can be performed independently of each other, which lends itself to parallelization. However, parallelizing this algorithm is complicated by its imbalanced computation between rounds, the sparse nature of the solution set, and the large applications we target. Prior work on parallel wavefront-style dynamic programming (WDP) has focused primarily on shared-memory parallelism and is almost exclusively theoretical. Our work develops novel parallel solutions that use distributed-memory parallelism to solve large problems. We experimentally evaluate the performance of our two partitioning schemes and two messaging schemes on a small cluster and on a supercomputer-class machine. Our results show significant performance improvements from our distributed parallelization (speedups of up to 26x over sequential using 45 processes). Our solution scales efficiently to large degrees of distributed parallelism, which is necessary for solving large problems, ones too big to fit into GPU memory or even into RAM on a single node.
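The round structure the abstract describes can be illustrated with a minimal sketch (not the authors' code; the edit-distance recurrence and function name here are illustrative assumptions): cells on each anti-diagonal of a DP table depend only on earlier diagonals, so within a round they are independent and could be computed in parallel.

```python
# Wavefront-style DP sketch: edit distance computed diagonal by diagonal.
# Each anti-diagonal d is one "round"; its cells depend only on rounds
# d-1 and d-2, so they are mutually independent (parallelizable).
def edit_distance_wavefront(a: str, b: str) -> int:
    n, m = len(a), len(b)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):  # base cases: delete all of a's prefix
        D[i][0] = i
    for j in range(m + 1):  # base cases: insert all of b's prefix
        D[0][j] = j
    # Visit cells in wavefront order: diagonal d holds cells with i + j == d.
    for d in range(2, n + m + 1):
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            cost = 0 if a[i - 1] == b[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,      # deletion
                          D[i][j - 1] + 1,      # insertion
                          D[i - 1][j - 1] + cost)  # match/substitute
    return D[n][m]

print(edit_distance_wavefront("kitten", "sitting"))  # prints 3
```

Note how round sizes vary (diagonals grow, then shrink), hinting at the inter-round load imbalance the paper addresses.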
Ferguson, M., Fontes, L., & Newhall, T. (2023). Efficient Parallelization of Dynamic Programming for Large Applications. In PEARC 2023 - Computing for the common good: Practice and Experience in Advanced Research Computing (pp. 1–9). Association for Computing Machinery, Inc. https://doi.org/10.1145/3569951.3593600