Big data hadoop mapreduce job scheduling: A short survey


Abstract

The current era of peta- to zeta-scale data arises from the complex digital world, where information is continuously collected from devices, social media sites, and other sources; this large volume of information is known as big data. Much of it cannot be stored and processed effectively because scalable and efficient schedulers are lacking. A major driver is that data across the digital world roughly doubles day by day, so database sizes have grown from terabytes toward zettabytes. Apache Hadoop, an open-source framework, has become an innovative tool for handling huge volumes of information through its two core components, the Hadoop Distributed File System (HDFS) and MapReduce, which efficiently store, process, and serve massive amounts of text, image, audio, and video data. Building and selecting a well-organized scheduler is a key factor in choosing nodes, optimizing resources, and achieving high performance on complex workloads. This paper presents a survey and examination of Hadoop scheduling algorithms, identifying their uses and shortcomings.
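As context for the scheduler comparison the abstract describes: in modern Hadoop, the MapReduce job scheduler is a pluggable component of YARN, selected in the cluster's `yarn-site.xml`. A minimal sketch of switching the ResourceManager from the default CapacityScheduler to the FairScheduler (the exact default and available schedulers depend on the Hadoop version deployed):

```xml
<!-- yarn-site.xml: choose the pluggable scheduler used by the ResourceManager.
     FairScheduler shares cluster resources evenly across running jobs;
     CapacityScheduler partitions capacity into hierarchical queues. -->
<configuration>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
</configuration>
```

Surveys such as this one compare how choices like FIFO, Capacity, and Fair scheduling trade off fairness, locality, and throughput on shared clusters.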


CITATION STYLE

APA

Deshai, N., Sekhar, B. V. D. S., Venkataramana, S., Srinivas, K., & Varma, G. P. S. (2019). Big data hadoop mapreduce job scheduling: A short survey. In Advances in Intelligent Systems and Computing (Vol. 862, pp. 349–365). Springer Verlag. https://doi.org/10.1007/978-981-13-3329-3_33
