Stacking of SVMs for Classifying Intangible Cultural Heritage Images

Abstract

Our investigation aims at classifying images of the intangible cultural heritage (ICH) in the Mekong Delta, Vietnam. We collect an image dataset of 17 ICH categories and manually annotate it. The comparative study of ICH image classification is done with support vector machines (SVM) and several popular vision approaches, including handcrafted features such as the scale-invariant feature transform (SIFT) with the bag-of-words (BoW) model, the histogram of oriented gradients (HOG), and GIST, as well as deep features extracted by VGG19, ResNet50, Inception v3, and Xception. The numerical test results on the 17-category ICH dataset show that SVM models learned from Inception v3 and Xception features give good accuracies of 61.54% and 62.89%, respectively. We propose to stack SVM models using different visual features to improve on the classification result of any single one. The triplets (SVM-Xception, SVM-Inception-v3, SVM-VGG19) and (SVM-Xception, SVM-Inception-v3, SVM-SIFT-BoW) both achieve 65.32% classification accuracy.
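The stacking idea described above — training one SVM per visual feature type and combining their outputs with a meta-learner — can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the synthetic arrays stand in for the real feature views (e.g. Xception, Inception-v3, VGG19 embeddings), and the dimensions, class count, and meta-learner choice are assumptions for brevity.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a dataset where each image has three
# concatenated feature views (hypothetical dims: 20 each).
n = 300
X = rng.normal(size=(n, 60))
y = rng.integers(0, 3, size=n)  # 3 classes for brevity (the paper uses 17)
X[:, :5] += y[:, None]          # make the data separable for the demo

def view(cols):
    # Selects one feature view (column slice) from the stacked matrix.
    return FunctionTransformer(lambda Z, c=cols: Z[:, c])

views = [slice(0, 20), slice(20, 40), slice(40, 60)]

# One SVM per feature view, mirroring SVM-Xception, SVM-Inception-v3, ...
base = [
    (f"svm_view{i}", make_pipeline(view(s), SVC(kernel="rbf", probability=True)))
    for i, s in enumerate(views)
]

# Stack the per-view SVMs; a logistic-regression meta-learner combines
# their class probabilities (the paper's combiner may differ).
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
stack.fit(Xtr, ytr)
print(f"stacked accuracy: {stack.score(Xte, yte):.2f}")
```

Each base pipeline sees only its own slice of the feature matrix, so the ensemble behaves like three independently trained SVMs whose predictions are fused by the final estimator.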

Citation (APA)

Do, T. N., Pham, T. P., Pham, N. K., Nguyen, H. H., Tabia, K., & Benferhat, S. (2020). Stacking of SVMs for Classifying Intangible Cultural Heritage Images. In Advances in Intelligent Systems and Computing (Vol. 1121 AISC, pp. 186–196). Springer. https://doi.org/10.1007/978-3-030-38364-0_17
