Data-intensive applications are becoming commonplace in all science disciplines. They comprise a rich set of sub-domains such as data engineering, deep learning, and machine learning, and they are built around efficient data abstractions and operators suited to the needs of each domain. The lack of a clear definition of data structures and operators in the field has often led to implementations that do not work well together. The HPTMT architecture that we proposed recently identifies a set of data structures, operators, and an execution model for creating rich data applications that link all aspects of data engineering and data science together efficiently. This paper elaborates on and illustrates this architecture using an end-to-end application in which deep learning and data engineering components work together. Our analysis shows that the proposed system architecture is better suited to high-performance computing environments than current big data processing systems. Furthermore, the proposed system emphasizes the importance of efficient, compact data structures such as the Apache Arrow tabular data representation, which is designed for high performance. The system integration we propose thus scales a sequential computation to a distributed computation while retaining optimal performance and providing a highly usable application programming interface.
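The following is a minimal illustrative sketch, not taken from the paper, of the kind of compact columnar tabular representation the abstract refers to. It assumes the pyarrow Python library and shows only a local Apache Arrow table with a vectorized column operation; the distributed operators and execution model of the proposed system are not shown here.

```python
import pyarrow as pa
import pyarrow.compute as pc

# Build a small in-memory Arrow table; columns are stored contiguously,
# which is what makes Arrow attractive as a compact, zero-copy exchange
# format between data engineering and deep learning stages.
table = pa.table({
    "user_id": [1, 2, 3, 4],
    "score": [0.7, 0.1, 0.9, 0.4],
})

# Apply a vectorized comparison and filter on the columnar data.
mask = pc.greater(table["score"], 0.5)
filtered = table.filter(mask)

print(filtered.to_pydict())  # {'user_id': [1, 3], 'score': [0.7, 0.9]}
```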