Benchmarking for Graph Clustering and Partitioning

  • Bader D
  • Meyerhenke H
  • Sanders P
  • Schulz C
  • Kappes A
  • Wagner D

Abstract

Synonyms: Algorithm evaluation; Graph repository; Test instances

Glossary:
  • Benchmarking: performance evaluation for comparison to the state of the art.
  • Benchmark Suite: set of instances used for benchmarking.

Definition: Benchmarking refers to a repeatable performance evaluation as a means to compare one's work to the state of the art in the respective field. As an example, benchmarking can compare the computing performance of new and old hardware. In the context of computing, many different benchmarks of various sorts have been used. A prominent example is the Linpack benchmark of the TOP500 list of the fastest computers in the world, which measures hardware performance by solving a dense linear algebra problem. Different categories of benchmarks include sequential vs. parallel, microbenchmark vs. application, and fixed code vs. informal problem description. See, e.g., Weicker (2002) for a more detailed treatment of hardware evaluation.

When it comes to benchmarking algorithms for network analysis, the typical measures of interest are solution quality and running time. The comparison process requires the establishment of widely accepted benchmark instances on which the algorithms have to compete. In the course of the 10th DIMACS Implementation Challenge on Graph Partitioning and Graph Clustering (Bader et al. 2012), we have assembled a suite of graphs and graph generators intended for comparing graph algorithms with each other. While our particular focus has been on assembling instances for benchmarking graph partitioning and graph clustering algorithms, we believe the suite to be useful for related fields as well, including the broad field of network analysis (which encompasses graph clustering, also known as community detection) and various combinatorial problems.

The purpose of DIMACS Implementation Challenges is to assess the practical performance of algorithms in a given problem domain. These challenges are scientific competitions in areas where worst-case and probabilistic analysis yield unrealistic results. Where analysis fails, experimentation can provide insights into realistic algorithm performance. By evaluating different implementations on the assembled benchmark suite, the challenges create a reproducible picture of the state of the art in the area under consideration. This helps to foster an effective technology transfer within the research areas of algorithms, data structures, and implementation techniques, as well as a transfer back to the original applications.
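The two measures named above, solution quality and running time, can be made concrete with a small evaluation harness. The following Python sketch is not from the article; it assumes instances in the METIS adjacency format used by many DIMACS Challenge inputs, uses edge cut and block balance as the quality measures for partitioning, and treats `random_partitioner` as a hypothetical placeholder baseline to be swapped for a real algorithm.

```python
import random
import time
from collections import Counter

def read_metis_graph(path):
    """Read a graph in METIS adjacency format, as used for many DIMACS
    Challenge instances: a header line 'n m', then one line per vertex
    listing its 1-based neighbor ids. Lines starting with '%' are comments."""
    with open(path) as f:
        lines = [ln for ln in f if not ln.startswith("%")]
    n = int(lines[0].split()[0])
    return [[int(v) - 1 for v in ln.split()] for ln in lines[1:n + 1]]

def edge_cut(adj, part):
    """Count edges whose endpoints lie in different blocks (each edge
    appears twice in a METIS adjacency list, so count u < v only)."""
    return sum(1 for u, nbrs in enumerate(adj)
                 for v in nbrs if u < v and part[u] != part[v])

def balance(part, k):
    """Largest block size relative to perfect balance (1.0 is ideal)."""
    return max(Counter(part).values()) * k / len(part)

def random_partitioner(adj, k):
    """Hypothetical placeholder baseline: assign each vertex a random block."""
    return [random.randrange(k) for _ in range(len(adj))]

def benchmark(partitioner, adj, k, runs=5):
    """Run a partitioner several times; report best cut and mean time."""
    cuts, times = [], []
    for _ in range(runs):
        start = time.perf_counter()
        part = partitioner(adj, k)
        times.append(time.perf_counter() - start)
        cuts.append(edge_cut(adj, part))
    return min(cuts), sum(times) / len(times)

if __name__ == "__main__":
    adj = read_metis_graph("graph.metis")  # hypothetical instance file
    cut, secs = benchmark(random_partitioner, adj, k=4)
    print(f"best edge cut: {cut}, mean time per run: {secs:.3f} s")
```

Reporting the best cut over several runs alongside the mean running time mirrors common practice for randomized partitioners; for clustering algorithms, a quality measure such as modularity would take the place of edge cut.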

Cite

CITATION STYLE

APA

Bader, D. A., Meyerhenke, H., Sanders, P., Schulz, C., Kappes, A., & Wagner, D. (2014). Benchmarking for Graph Clustering and Partitioning. In Encyclopedia of Social Network Analysis and Mining (pp. 73–82). Springer New York. https://doi.org/10.1007/978-1-4614-6170-8_23
