Learning from partially annotated OPT images by contextual relevance ranking


Abstract

Annotations delineating regions of interest can provide valuable information for training medical image classification and segmentation methods. However, the process of obtaining annotations is tedious and time-consuming, especially for high-resolution volumetric images. In this paper we present a novel learning framework that reduces the requirement for manual annotations while achieving competitive classification performance. The approach is evaluated on a dataset of 59 3D optical projection tomography images of colorectal polyps. The results show that the proposed method can robustly infer patterns from partially annotated images at low computational cost. © 2013 Springer-Verlag.

Citation (APA)

Li, W., Zhang, J., Zheng, W. S., Coats, M., Carey, F. A., & McKenna, S. J. (2013). Learning from partially annotated OPT images by contextual relevance ranking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8151 LNCS, pp. 429–436). https://doi.org/10.1007/978-3-642-40760-4_54
