In the Search for Modular MapReduce Solutions for Working with BigData: Aspect-Oriented Hadoop (original title: "En la Búsqueda de Soluciones MapReduce Modulares para el Trabajo con BigData: Hadoop Orientado a Aspectos")

  • Vidal-Silva C. L.
  • Bustamante M. A.
  • Lapo M. del C.
  • Núñez M. de los Á.
Citations: N/A
Readers: 10

Abstract

In the search for modular MapReduce solutions, the main goal of this work is to apply Hadoop and AspectJ to define Aspect-Combine functions. MapReduce is a computing approach for working with large volumes of data (BigData) in a distributed environment, with high levels of abstraction and the ordered use of Map and Reduce functions: the first maps or identifies relevant data, and the second summarizes data and final results. In a MapReduce system, Mapper and Reducer nodes implement the Map and Reduce functions, respectively. Hadoop is a free implementation of MapReduce that allows the definition of Combine functions; however, the execution of Combine is not guaranteed in Hadoop. This problem motivated this work. As a result, a greater degree of modularization is reached from a theoretical point of view; from a practical point of view, there are also performance improvements.
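To illustrate the Map/Combine/Reduce pipeline the abstract describes, here is a minimal self-contained Java sketch (not the authors' AspectJ code, and not the actual Hadoop API): a word count where Combine acts as a local pre-aggregation step. Because Hadoop may run the combiner zero or more times, a correct job must produce the same totals with or without it, which the sketch checks explicitly.

```java
import java.util.*;
import java.util.stream.*;

// Illustrative sketch of MapReduce's Map -> Combine -> Reduce stages.
// In Hadoop, Combine is an optional local pre-aggregation; the framework
// may skip it, so the final result must not depend on whether it ran.
public class MapCombineReduceSketch {

    // Map: emit a (word, 1) pair for every word in a line.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.split("\\s+"))
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Combine: locally pre-aggregate one mapper's output.
    static List<Map.Entry<String, Integer>> combine(
            List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> partial = new HashMap<>();
        pairs.forEach(p -> partial.merge(p.getKey(), p.getValue(), Integer::sum));
        return new ArrayList<>(partial.entrySet());
    }

    // Reduce: final aggregation across all mappers' (combined) output.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> totals = new HashMap<>();
        pairs.forEach(p -> totals.merge(p.getKey(), p.getValue(), Integer::sum));
        return totals;
    }

    public static void main(String[] args) {
        List<String> splits = List.of("map reduce map", "combine map");

        // Path A: the combiner runs on each split; Path B: it is skipped.
        List<Map.Entry<String, Integer>> withCombine = new ArrayList<>();
        List<Map.Entry<String, Integer>> withoutCombine = new ArrayList<>();
        for (String s : splits) {
            withCombine.addAll(combine(map(s)));
            withoutCombine.addAll(map(s));
        }

        // The combiner is associative and commutative, so both paths
        // yield identical totals -- the property a Hadoop job must keep
        // precisely because Combine execution is not guaranteed.
        System.out.println(reduce(withCombine).equals(reduce(withoutCombine)));
        System.out.println(reduce(withCombine).get("map"));
    }
}
```

In real Hadoop code the combiner is registered with `Job.setCombinerClass(...)` and is typically the same class as the reducer for sums and counts; this sketch only mirrors the data flow, not the framework plumbing.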

Citation (APA)

Vidal-Silva, C. L., Bustamante, M. A., Lapo, M. del C., & Núñez, M. de los Á. (2018). En la Búsqueda de Soluciones MapReduce Modulares para el Trabajo con BigData: Hadoop Orientado a Aspectos. Información Tecnológica, 29(2), 133–140. https://doi.org/10.4067/s0718-07642018000200133

Readers' Seniority

  • Lecturer / Post doc: 1 (100%)

Readers' Discipline

  • Computer Science: 2 (40%)
  • Engineering: 2 (40%)
  • Arts and Humanities: 1 (20%)

Article Metrics

  • News Mentions: 3
