Design issues of big data parallelisms


Abstract

Data-intensive computing for scientific research needs effective tools for capturing data, curating it for the design of appropriate algorithms, and performing multidimensional analysis that supports effective decision making for society. The computational environments used for data-intensive problems, such as sentiment analysis and opinion mining of social media, Massive Open Online Courses (MOOCs), CERN's Large Hadron Collider, and the Square Kilometre Array (SKA) radio telescope project, are in principle capable of generating exabytes (EB) of data per day, but present conditions limit them to more manageable data-collection rates. The variety of disciplines involved, and the differing data-generation rates of laboratory experiments conducted both online and offline, make the creation of effective tools a formidable problem. In this paper we discuss different data-intensive computing tools and trends in emerging technologies, how big data processing relies heavily on these tools, and how they help in building models and in decision making.

Citation (APA)

Mondal, K. (2016). Design issues of big data parallelisms. In Advances in Intelligent Systems and Computing (Vol. 434, pp. 209–217). Springer Verlag. https://doi.org/10.1007/978-81-322-2752-6_20
