AIBench: Towards Scalable and Comprehensive Datacenter AI Benchmarking


Abstract

AI benchmarking provides yardsticks for measuring and evaluating innovative AI algorithms, architectures, and systems. Coordinated by BenchCouncil, this paper presents our joint research and engineering efforts with several academic and industrial partners on AIBench, a suite of datacenter AI benchmarks. The benchmarks are publicly available from http://www.benchcouncil.org/AIBench/index.html. Presently, AIBench covers 16 problem domains—image classification, image generation, text-to-text translation, image-to-text, image-to-image, speech-to-text, face embedding, 3D face recognition, object detection, video prediction, image compression, recommendation, 3D object reconstruction, text summarization, spatial transformer, and learning to rank—as well as two end-to-end application AI benchmarks. In addition, AI benchmark suites for high performance computing (HPC), IoT, and edge are released on the BenchCouncil website. This is by far the most comprehensive AI benchmarking research and engineering effort.


Citation (APA)

Gao, W., Luo, C., Wang, L., Xiong, X., Chen, J., Hao, T., … Zhan, J. (2019). AIBench: Towards Scalable and Comprehensive Datacenter AI Benchmarking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11459 LNCS, pp. 3–9). Springer. https://doi.org/10.1007/978-3-030-32813-9_1
