Learning Visual Dictionaries from Class-Specific Superpixel Segmentation

Abstract

Visual dictionaries (Bag of Visual Words, BoVW) can be a very powerful technique for image description whenever the number of training images is small, making them an attractive alternative to deep learning techniques. Nevertheless, models for BoVW learning are usually unsupervised and rely on the same set of visual words for all images in the training set. We present a method that works with small supervised training sets. It first generates superpixels from multiple images of the same class for interest point detection, and then builds one visual dictionary per class. We show that the detected interest points can be more relevant than those from traditional strategies (e.g., grid sampling) in the context of a given application, the classification of intestinal parasite images. The study uses three image datasets covering 15 different species of parasites plus a diverse impurity class, which makes the problem difficult because its examples resemble those of all the remaining parasite classes.
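
The abstract describes the pipeline only at a high level. The sketch below illustrates one plausible reading of it in Python: superpixels are computed per training image, interest points are taken at superpixel centroids, local descriptors are extracted around them, and one codebook is clustered per class so that an image can be encoded against all class-specific dictionaries. SLIC superpixels, HOG patch descriptors, centroid interest points, and k-means codebooks are stand-ins chosen for illustration; the paper's actual superpixel, description, and dictionary-learning choices may differ.

# Minimal sketch (not the authors' implementation): SLIC, HOG, and k-means
# are assumed components, not taken from the paper.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.segmentation import slic
from skimage.transform import resize
from sklearn.cluster import KMeans


def superpixel_interest_points(image, n_segments=200):
    # One interest point per superpixel: its centroid (assumed strategy).
    labels = slic(image, n_segments=n_segments,
                  channel_axis=-1 if image.ndim == 3 else None)
    points = []
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        points.append((int(ys.mean()), int(xs.mean())))
    return points


def patch_descriptor(gray, y, x, size=32):
    # Describe a fixed-size patch around an interest point with HOG.
    half = size // 2
    patch = gray[max(0, y - half):y + half, max(0, x - half):x + half]
    return hog(resize(patch, (size, size)))


def build_class_dictionaries(images_by_class, n_words=64):
    # One visual dictionary (k-means codebook) per class.
    dictionaries = {}
    for cls, images in images_by_class.items():
        descriptors = []
        for img in images:
            gray = rgb2gray(img) if img.ndim == 3 else img
            for y, x in superpixel_interest_points(img):
                descriptors.append(patch_descriptor(gray, y, x))
        dictionaries[cls] = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(descriptors))
    return dictionaries


def bovw_features(image, dictionaries):
    # Encode an image as the concatenation of its word histograms over all
    # class-specific dictionaries; the result can feed any standard classifier.
    gray = rgb2gray(image) if image.ndim == 3 else image
    descs = np.vstack([patch_descriptor(gray, y, x)
                       for y, x in superpixel_interest_points(image)])
    hists = []
    for cls in sorted(dictionaries):
        words = dictionaries[cls].predict(descs)
        hists.append(np.bincount(words, minlength=dictionaries[cls].n_clusters) / len(words))
    return np.concatenate(hists)

Under these assumptions, `images_by_class` maps each class label (parasite species or impurity) to its training images; the concatenated histograms make the per-class dictionaries visible to the final classifier.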

Citation (APA)

Castelo-Fernández, C., & Falcão, A. X. (2019). Learning Visual Dictionaries from Class-Specific Superpixel Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11678 LNCS, pp. 171–182). Springer Verlag. https://doi.org/10.1007/978-3-030-29888-3_14
