Annotations delineating regions of interest can provide valuable information for training medical image classification and segmentation methods. However, the process of obtaining annotations is tedious and time-consuming, especially for high-resolution volumetric images. In this paper we present a novel learning framework that reduces the requirement for manual annotation while achieving competitive classification performance. The approach is evaluated on a dataset of 59 3D optical projection tomography images of colorectal polyps. The results show that the proposed method can robustly infer patterns from partially annotated images at low computational cost. © 2013 Springer-Verlag.
CITATION STYLE
Li, W., Zhang, J., Zheng, W. S., Coats, M., Carey, F. A., & McKenna, S. J. (2013). Learning from partially annotated OPT images by contextual relevance ranking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8151 LNCS, pp. 429–436). https://doi.org/10.1007/978-3-642-40760-4_54