WildDash - Creating Hazard-Aware Benchmarks

Citations: 31 · Mendeley readers: 121

This article is free to access.

Abstract

Test datasets should contain many different challenging aspects so that the robustness and real-world applicability of algorithms can be assessed. In this work, we present a new test dataset for semantic and instance segmentation for the automotive domain. We have conducted a thorough risk analysis to identify situations and aspects that can reduce the output performance for these tasks. Based on this analysis we have designed our new dataset. Meta-information is supplied to mark which individual visual hazards are present in each test case. Furthermore, a new benchmark evaluation method is presented that uses the meta-information to calculate the robustness of a given algorithm with respect to the individual hazards. We show how this new approach allows for a more expressive characterization of algorithm robustness by comparing three baseline algorithms.
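The abstract describes an evaluation method that uses per-test-case hazard meta-information to quantify an algorithm's robustness to individual hazards. The sketch below illustrates one plausible way such a per-hazard robustness score could be computed: comparing mean segmentation scores on frames that exhibit a hazard against frames that do not. The function name, the input format, and the ratio-based metric are illustrative assumptions, not the exact evaluation formula used by the WildDash benchmark.

```python
# Hedged sketch of per-hazard robustness from hazard-tagged test scores.
# The ratio metric here is an assumption for illustration, not the
# paper's actual WildDash evaluation method.

def hazard_robustness(scores, hazards):
    """Compute a per-hazard robustness ratio.

    scores:  {frame_id: segmentation score (e.g. mIoU)}
    hazards: {frame_id: set of hazard tags present in that frame}

    Returns {hazard: mean score on frames WITH the hazard divided by
    mean score on frames WITHOUT it}. A value near 1.0 suggests the
    algorithm is robust to that hazard; values near 0 suggest it is not.
    """
    all_tags = set().union(*hazards.values())
    result = {}
    for tag in all_tags:
        with_h = [scores[f] for f in scores if tag in hazards[f]]
        without_h = [scores[f] for f in scores if tag not in hazards[f]]
        if with_h and without_h:
            mean_with = sum(with_h) / len(with_h)
            mean_without = sum(without_h) / len(without_h)
            result[tag] = mean_with / mean_without
    return result

# Toy example: four test frames, two tagged with visual hazards.
scores = {"f1": 0.80, "f2": 0.40, "f3": 0.78, "f4": 0.35}
hazards = {"f1": set(), "f2": {"blur"}, "f3": set(), "f4": {"blur", "rain"}}
print(hazard_robustness(scores, hazards))
```

Reporting one ratio per hazard, rather than a single aggregate score, is what allows the more expressive characterization of robustness the abstract mentions: two algorithms with equal overall accuracy can differ sharply in which hazards degrade them.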

Citation (APA)

Zendel, O., Honauer, K., Murschitz, M., Steininger, D., & Domínguez, G. F. (2018). WildDash-creating hazard-aware benchmarks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11210 LNCS, pp. 407–421). Springer Verlag. https://doi.org/10.1007/978-3-030-01231-1_25
