Framework for parallelisation on big data


Abstract

The parallelisation of big data is emerging as an important framework for large-scale parallel data applications such as seismic data processing. Seismic datasets are so large and complex that traditional data processing software cannot handle them; for example, implementing parallel processing in seismic applications to improve processing speed is complex in nature. To overcome this issue, a simple technique is needed that provides parallel processing for big data applications such as seismic algorithms. In our framework, we used Apache Hadoop with its MapReduce function. All experiments were conducted on the Red Hat CentOS platform. Finally, we studied the bottlenecks and improved the overall performance of the system for a seismic algorithm (stochastic inversion).
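The abstract names Hadoop's MapReduce as the parallelisation mechanism but gives no detail. As a purely illustrative sketch (the paper's actual stochastic-inversion workflow is not shown here, and the record layout, key names, and averaging task are hypothetical), a Hadoop-Streaming-style map and reduce pass over seismic trace records might look like this:

```python
# Illustrative MapReduce sketch, NOT the paper's algorithm: average trace
# amplitude per shot. In Hadoop Streaming the mapper and reducer would read
# and write tab-separated lines on stdin/stdout; here they are plain
# generators so the sketch runs standalone.
from itertools import groupby
from operator import itemgetter

def mapper(records):
    # Emit (key, value) pairs — one (shot_id, amplitude) pair per record.
    for shot_id, amplitude in records:
        yield shot_id, amplitude

def reducer(pairs):
    # Hadoop sorts mapper output by key before the reduce phase;
    # sorted() + groupby mimics that shuffle-and-sort step.
    ordered = sorted(pairs, key=itemgetter(0))
    for shot_id, group in groupby(ordered, key=itemgetter(0)):
        amps = [amp for _, amp in group]
        yield shot_id, sum(amps) / len(amps)

# Toy input: (shot_id, amplitude) records spread across "splits".
records = [("s1", 2), ("s2", 4), ("s1", 6), ("s2", 0)]
result = dict(reducer(mapper(records)))
```

In a real Hadoop deployment each mapper would process one input split in parallel across the cluster, which is the source of the speed-up the paper measures.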

Citation (APA)
Rahim, L. A., Kudiri, K. M., & Bahattacharjee, S. (2019). Framework for parallelisation on big data. PLoS ONE, 14(5). https://doi.org/10.1371/journal.pone.0214044
