Big data processing frameworks have received attention because of the growing importance of high-performance computation. They are expected to quickly process huge amounts of data in memory on a cluster with a simple programming model. Apache Spark has become one of the most popular of these frameworks. Several studies have analyzed Spark programs and optimized their performance. Recent versions of Spark generate optimized Java code from a Spark program, but few research works have analyzed and improved such generated code to achieve better performance. Here, two types of problems were identified by inspecting the generated code, namely, access to column-oriented storage and access to a primitive-type array. The resulting performance issues in the generated code were analyzed, and optimizations that eliminate the inefficient code were devised to solve them. The proposed optimizations were then implemented for Spark. Experimental results on a cluster of five Intel machines showed performance improvements of up to 1.4× for TPC-H queries and up to 1.4× for machine-learning programs. These optimizations have since been integrated into the release version of Apache Spark 2.3.
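To make the two problem classes concrete, the following is a minimal Scala sketch (not taken from the paper) of a Spark program whose physical plan exercises both patterns: a query over column-oriented (cached, in-memory) storage and an operation over a primitive-type array column. Whole-stage code generation compiles such plans into Java code, which is the kind of generated code the study inspects. The object name and data are illustrative, and the commented-out debug call for dumping the generated Java source is an assumption whose exact API may differ across Spark versions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CodegenExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("codegen-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Column-oriented storage: caching materializes the data in Spark's
    // in-memory columnar format, so the generated Java code reads it
    // through column accessors.
    val df = (1 to 1000000).toDF("x").cache()
    val agg = df.filter($"x" % 2 === 0).agg(sum($"x"))

    // Primitive-type array column: element accesses to Array[Int] values
    // also appear in the generated code.
    val arrays = Seq(Array(1, 2, 3), Array(4, 5)).toDF("a")
    val sizes = arrays.select(size($"a").as("n"))

    // Inspect the physical plan; WholeStageCodegen nodes mark the stages
    // for which Java code is generated.
    agg.explain(true)
    // agg.queryExecution.debug.codegen()  // dumps the generated Java source (API may vary by version)

    agg.show()
    sizes.show()
    spark.stop()
  }
}
```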
Ishizaki, K. (2019). Analyzing and optimizing Java code generation for apache spark query plan. In ICPE 2019 - Proceedings of the 2019 ACM/SPEC International Conference on Performance Engineering (pp. 91–102). Association for Computing Machinery, Inc. https://doi.org/10.1145/3297663.3310300