Abstract
Purpose: This study aims to document the early stages of development of an unsupervised, deep learning-based clinical annotation and segmentation tool (CAST) capable of isolating clinically significant teeth in both intraoral photographs and their corresponding oral radiographs.

Methods: The dataset consisted of 172 intraoral photographs and 424 dental radiographs, manually annotated by two operators and augmented to yield 6258 images for training, 183 for validation, and 98 for testing. Training involved an object detection model ('YOLOv8') combined with a feature extraction system ('Segment Anything Model'). This combination enabled the auto-annotation and segmentation of tooth-related features and lesions in both types of images without operator intervention. Outputs were further processed using a data relabelling tool ('X-AnyLabeling'), which provided the option to manually reannotate erroneous outputs through reinforcement learning.

Results: The trained object detection model achieved a mean average precision (mAP) of 77.4%, with precision and recall rates of 75.0% and 72.1%, respectively. The model segmented features from oral images annotated with polygonal boundaries better than radiological images annotated with bounding boxes.

Conclusion: The auto-annotation and segmentation tool showed initial promise in automating the image labelling and segmentation process for intraoral images and radiographs. Further work is required to address the limitations.
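The precision and recall rates reported above are conventionally derived by matching predicted bounding boxes to ground-truth annotations at an intersection-over-union (IoU) threshold. The sketch below illustrates that computation in plain Python; the greedy matching scheme and the 0.5 threshold are common defaults and assumptions here, not details taken from the paper.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedily match each prediction to an unmatched ground truth
    at IoU >= thr; unmatched predictions are false positives,
    unmatched ground truths are false negatives."""
    matched = set()
    tp = 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= thr:
                matched.add(i)
                tp += 1
                break
    fp = len(preds) - tp
    fn = len(gts) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall
```

mAP extends this idea by averaging precision over recall levels (and, in the COCO convention, over several IoU thresholds), but the per-threshold matching step is the same.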
Farook, T. H., Saad, F. H., Ahmed, S., & Dudley, J. (2023). Clinical Annotation and Segmentation Tool (CAST) Implementation for Dental Diagnostics. Cureus. https://doi.org/10.7759/cureus.48734