Very large data sets often have a flat but regular structure and span multiple disks and machines. Examples include telephone call records, network logs, and web document repositories. These large data sets are not amenable to study using traditional database techniques, if only because they can be too large to fit in a single relational database. On the other hand, many of the analyses done on them can be expressed using simple, easily distributed computations: filtering, aggregation, extraction of statistics, and so on. We present a system for automating such analyses. A filtering phase, in which a query is expressed using a new procedural programming language, emits data to an aggregation phase. Both phases are distributed over hundreds or even thousands of computers. The results are then collated and saved to a file. The design (including the separation into two phases, the form of the programming language, and the properties of the aggregators) exploits the parallelism inherent in having data and computation distributed across many machines.

© 2005 IOS Press and the authors. All rights reserved.
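The two-phase design described above can be illustrated with a small sketch. This is not Sawzall itself; it is a hypothetical Python model of the idea: a filtering phase runs independently over each machine's chunk of records and emits (key, value) pairs, and an aggregation phase merges those emissions with a commutative, associative operation (here, a sum table). The record layout and chunking are invented for illustration.

```python
from collections import defaultdict

# Hypothetical record stream: (hour, bytes) pairs from a network log.
records = [(0, 120), (1, 300), (0, 80), (2, 50), (1, 10)]

def filter_phase(chunk):
    """Per-record filtering: emit (key, value) pairs, analogous to
    a Sawzall program emitting values to a sum table."""
    for hour, nbytes in chunk:
        yield hour, nbytes

def aggregate_phase(emitted_streams):
    """Aggregation: merge the partial emissions from every machine's
    filter output into one table. Because addition is commutative and
    associative, the merge order does not matter."""
    table = defaultdict(int)
    for stream in emitted_streams:
        for key, value in stream:
            table[key] += value
    return dict(table)

# Simulate two machines, each holding part of the data.
chunks = [records[:3], records[3:]]
totals = aggregate_phase(filter_phase(c) for c in chunks)
print(totals)  # {0: 200, 1: 310, 2: 50}
```

Because the filter phase touches each record exactly once and the aggregators only need each machine's partial table, both phases parallelize trivially across however many machines hold the data.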
Pike, R., Dorward, S., Griesemer, R., & Quinlan, S. (2005). Interpreting the data: Parallel analysis with Sawzall. Scientific Programming, 13(4), 277–298. https://doi.org/10.1155/2005/962135