Efficient OR Hadoop: Why Not Both?

  • Dittrich J
  • Richter S
  • Schuh S

Abstract

In this article, we give an overview of research on Big Data processing in Hadoop conducted at the Information Systems Group at Saarland University. We discuss how to make Hadoop efficient and briefly survey four of our projects in this context: Hadoop++, Trojan Layouts, HAIL, and LIAH. All of these projects aim to provide efficient physical layouts in Hadoop, including vertically partitioned data layouts, clustered indexes, and adaptively created clustered indexes. Most of our techniques come (almost) for free: they introduce little to no overhead compared to standard Hadoop.

Citation (APA)

Dittrich, J., Richter, S., & Schuh, S. (2013). Efficient OR Hadoop: Why Not Both? Datenbank-Spektrum, 13(1), 17–22. https://doi.org/10.1007/s13222-012-0111-9
