Benchmarking Deep Learning Models for Automatic Ultrasonic Imaging Inspection


Abstract

The success of deep neural networks across a wide variety of cognitive tasks has raised expectations for AI-based interpretation of ultrasonic testing (UT) data in the non-destructive evaluation (NDE) field. Although this is a growing area of research, we identify two main barriers that hinder progress: the lack of publicly accessible, annotated real-world datasets and the scarcity of benchmarked performance results for state-of-the-art deep learning models. To address these issues, we first introduce a new dataset, 'USimgAIST', which contains more than 7000 real ultrasonic inspection images covering both normal cases and defective ones spanning 17 types of flaws. Using this dataset, we perform a comprehensive evaluation of representative deep learning models, with the aim of validating whether existing AI models can achieve human-level ultrasonic image understanding for defect characterization. In addition, we report detailed benchmarking comparisons, including defect detection accuracy, model complexity, memory usage, and inference time. We hope this study provides an overview of how advanced learning models perform on ultrasonic image analysis and lays the groundwork for prospective practitioners to compare their methods and results fairly.
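To make the comparison criteria concrete, the sketch below shows the kind of benchmarking loop the abstract describes: scoring a model on detection accuracy, parameter count (a proxy for model complexity and memory), and per-image inference time. `TinyModel`, its `predict` method, and the toy data are all hypothetical stand-ins, not the paper's actual models or the USimgAIST dataset API.

```python
import time

# Hypothetical stand-in for a trained defect-detection model; the class
# name, `weights`, and `predict` are illustrative only, not the paper's
# actual architectures or the USimgAIST dataset interface.
class TinyModel:
    def __init__(self, n_weights):
        self.weights = [0.0] * n_weights

    def predict(self, image):
        # Dummy inference: flag the image as defective (1) when the
        # mean pixel value exceeds 0.5, otherwise normal (0).
        score = sum(image) / len(image)
        return 1 if score > 0.5 else 0

def benchmark(model, images, labels):
    """Collect the metrics the paper compares: detection accuracy,
    parameter count, and average inference time per image."""
    start = time.perf_counter()
    preds = [model.predict(img) for img in images]
    elapsed = time.perf_counter() - start
    accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    return {
        "accuracy": accuracy,
        "num_params": len(model.weights),
        "ms_per_image": 1000.0 * elapsed / len(images),
    }

# Toy "images": flattened pixel lists with known ground-truth labels.
images = [[0.9, 0.8, 0.7], [0.1, 0.2, 0.1], [0.6, 0.9, 0.8], [0.2, 0.1, 0.3]]
labels = [1, 0, 1, 0]
report = benchmark(TinyModel(n_weights=10), images, labels)
print(report["accuracy"])  # 1.0 on this toy data
```

In a real study, the same loop would wrap each candidate network and the full annotated test split, so that every model is measured under identical conditions.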

Citation (APA)

Ye, J., & Toyama, N. (2021). Benchmarking Deep Learning Models for Automatic Ultrasonic Imaging Inspection. IEEE Access, 9, 36986–36994. https://doi.org/10.1109/ACCESS.2021.3062860
