Robustness is an important concern in machine learning and pattern recognition, and has attracted considerable attention from both technical and scientific viewpoints. Robustness models the capacity of a computerized approach to resist perturbing phenomena and data uncertainties, which are common artefacts encountered when designing algorithms. However, this question has not been addressed in depth for image processing tasks. In this article, we propose a novel definition of robustness dedicated to image processing algorithms. By considering a generalized model of image data uncertainty, we encompass the classic additive Gaussian noise alteration, which we study through the evaluation of image denoising algorithms, but also more complex phenomena such as shape variability, which is considered for liver volume segmentation from medical images. Furthermore, we refine our evaluation of robustness with respect to our previous work by introducing a novel quality-scale definition. To do so, we calculate the worst loss of quality for a given algorithm over a set of uncertainty scales, together with the scale at which this drop appears. This new approach makes it possible to reveal an algorithm’s weaknesses, and to identify the kind of corrupted data for which they may occur.
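The abstract's quality-scale idea can be illustrated with a minimal sketch: given an algorithm's quality score at each uncertainty scale, report the worst loss of quality relative to the uncorrupted baseline and the scale at which that drop occurs. All names and values below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a quality-scale robustness measure: the worst
# quality drop across uncertainty scales, and where it happens.
# Function and variable names are illustrative, not from the paper.

def quality_scale_robustness(quality_by_scale, baseline_quality):
    """Return (worst_loss, scale_of_worst_loss).

    quality_by_scale maps an uncertainty scale (e.g. a noise level)
    to the algorithm's quality score at that scale; higher is better.
    """
    worst_scale, worst_quality = min(
        quality_by_scale.items(), key=lambda kv: kv[1]
    )
    return baseline_quality - worst_quality, worst_scale

# Example: made-up PSNR-like scores of a denoiser under rising noise.
scores = {5: 34.2, 15: 31.0, 25: 27.5, 50: 22.1}
loss, scale = quality_scale_robustness(scores, baseline_quality=36.0)
```

Here the largest drop (about 13.9) occurs at the highest noise scale, but for a less monotone algorithm the worst scale need not be the largest one, which is exactly what the proposed definition is meant to expose.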
CITATION STYLE
Vacavant, A., Lebre, M. A., Rositi, H., Grand-Brochier, M., & Strand, R. (2019). New Definition of Quality-Scale Robustness for Image Processing Algorithms, with Generalized Uncertainty Modeling, Applied to Denoising and Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11455 LNCS, pp. 138–149). Springer Verlag. https://doi.org/10.1007/978-3-030-23987-9_13