This paper describes a methodology for working with distributed systems and achieving performance in Big Data through the Hadoop framework, the Python programming language, and the Apache Hive module. The efficiency of the proposed methodology is tested through a case study addressing a real problem found in the supercomputing environment of the Center for Weather Forecasting and Climate Studies of the Brazilian National Institute for Space Research (CPTEC/INPE), which provides society with tools that help predict disasters and save lives. Three experiments involving this problem were run on a Cray XT-6 supercomputer: (i) the first uses Python on a sequential, single-processor architecture; (ii) the second uses Python and the Hadoop framework on a parallel and distributed architecture; (iii) the third combines Hadoop and Hive on a parallel and distributed architecture. The main results of these experiments are compared and discussed, and topics beyond the scope of this research are presented as recommendations and suggestions for future work.
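To make the second experiment concrete, the sketch below shows the kind of Python map and reduce stages that Hadoop Streaming executes over a distributed cluster. This is an illustrative assumption, not the paper's code: the CSV schema (`station,temperature`) and the averaging task are hypothetical, since the abstract does not describe the actual meteorological data.

```python
# Minimal Python map/reduce sketch in the style of Hadoop Streaming.
# Hypothetical input: CSV lines of the form "station_id,temperature".
from itertools import groupby
from operator import itemgetter

def map_records(lines):
    """Map stage: emit (station_id, temperature) pairs from CSV lines."""
    for line in lines:
        station, temp = line.strip().split(",")
        yield station, float(temp)

def reduce_records(pairs):
    """Reduce stage: average values per key. Hadoop's shuffle phase
    delivers pairs grouped by key; sorting here emulates that locally."""
    for station, group in groupby(sorted(pairs), key=itemgetter(0)):
        temps = [t for _, t in group]
        yield station, sum(temps) / len(temps)

if __name__ == "__main__":
    # Under Hadoop Streaming, mapper and reducer run as separate
    # processes reading stdin; here the stages are chained locally.
    sample = ["s1,20.0", "s2,30.0", "s1,22.0"]
    for station, avg in reduce_records(map_records(sample)):
        print(f"{station}\t{avg}")
```

In experiment (iii), an equivalent aggregation would instead be expressed declaratively in HiveQL (e.g. a `GROUP BY` with `AVG`), letting Hive generate the MapReduce jobs.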
Ramos, M. P., Tasinaffo, P. M., de Almeida, E. S., Achite, L. M., da Cunha, A. M., & Dias, L. A. V. (2016). Distributed systems performance for big data. In Advances in Intelligent Systems and Computing (Vol. 448, pp. 733–744). Springer Verlag. https://doi.org/10.1007/978-3-319-32467-8_64