Ramifications of incorrect image segmentations; emphasizing on the potential effects on deep learning methods failure

Citations: 1
Mendeley readers: 10

This article is free to access.

Abstract

Introduction: Detecting failure cases is critical to ensuring a safe self-driving system, since any flaw can directly result in an accident. The probability the model assigns to the genuine (true) class reflects model confidence better than the maximum predicted probability. As a result, the confidence distribution of failed predictions shifts toward lower values, while accurate predictions remain associated with high values, allowing considerably better separability between the two types of prediction. The study investigates the association of these ramifications with computational color constancy, which can negatively influence CNN image classification and semantic segmentation.

Methodology: Image datasets of different scales and complexity were used in the experiments; for instance, the MNIST and SVHN datasets provide small, comparatively simple images of digits. Each dataset's standard validation set was used for testing and for computing additional metrics, because ground truth is not publicly available for some test sets.

Results: The results showed that the proposed approach outperformed baseline methods by a considerable margin on small datasets and models in every context. Therefore, the True Class Probability (TCP) is an appropriate criterion for failure prediction, and ConfidNet is competent as a confidence estimator. One possible improvement would be to enlarge the validation set, but this would affect the predictive performance of the failure model. The confidence estimation was based on models whose test predictive performance was similar to the baselines.

Conclusions: The gap between validation accuracy and training accuracy was significant on CIFAR-100, which explains the modest enhancement in failure detection obtained via the validation set.
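The separability the abstract describes can be illustrated with a minimal sketch of the True Class Probability (TCP) criterion: the confidence score is the softmax probability assigned to the true class rather than to the predicted class, so misclassified samples receive low scores by construction. The logits and labels below are hypothetical toy values, not data from the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def true_class_probability(logits, labels):
    """TCP: softmax probability assigned to the *true* class.

    Unlike the maximum class probability (which looks at the predicted
    class), TCP is low whenever the model is wrong, which is what makes
    it a useful target for failure prediction.
    """
    probs = softmax(logits)
    return probs[np.arange(len(labels)), labels]

# Toy logits for three samples, all with true label 0 (illustrative only).
logits = np.array([[4.0, 1.0, 0.5],   # confident and correct
                   [1.2, 1.0, 0.9],   # uncertain but correct
                   [0.2, 3.0, 0.1]])  # confident but wrong
labels = np.array([0, 0, 0])

tcp = true_class_probability(logits, labels)
correct = logits.argmax(axis=-1) == labels
# Failed predictions get low TCP; correct ones keep higher TCP,
# giving the separation between prediction types described above.
```

At test time the true label is unknown, which is why a model such as ConfidNet is trained to regress this TCP score from the network's features.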

Citation (APA)

Al-Dmour, H. (2022). Ramifications of incorrect image segmentations; emphasizing on the potential effects on deep learning methods failure. Journal of Big Data, 9(1). https://doi.org/10.1186/s40537-022-00624-0
