Decoding generic visual representations from human brain activity using machine learning

Citations: 1
Readers (Mendeley): 13

This article is free to access.

Abstract

Among the most impressive recent applications of neural decoding is visual representation decoding, in which the category of an object that a subject sees or imagines is inferred from their brain activity. Despite growing interest in this task, there has been no extensive study of how the choice of machine learning model affects decoding accuracy. In this paper we provide an extensive evaluation of several machine learning models, along with different similarity metrics, for this task, and draw several interesting conclusions. In this way, the paper (a) paves the way for developing more advanced and accurate methods and (b) provides an extensive and easily reproducible baseline for the decoding task.
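The abstract describes the standard generic-decoding pipeline: a model maps brain activity to a visual feature representation, and a similarity metric identifies the category whose feature vector best matches the prediction. The sketch below illustrates that pipeline in Python on synthetic data; it is only an assumed, minimal example (ridge regression plus cosine similarity), not the specific models or metrics evaluated in the paper.

```python
# Minimal illustrative sketch of a generic visual-decoding pipeline.
# All data here are synthetic stand-ins; the actual study uses fMRI
# activity and image-derived feature vectors.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

n_train, n_test = 200, 50
n_voxels, n_feats = 1000, 64      # fMRI voxels -> visual feature dimension
n_categories = 10

# Synthetic brain activity (X) and target visual feature vectors (Y).
X_train = rng.standard_normal((n_train, n_voxels))
Y_train = rng.standard_normal((n_train, n_feats))
X_test = rng.standard_normal((n_test, n_voxels))
test_labels = rng.integers(0, n_categories, n_test)

# Category prototypes: one reference feature vector per candidate category.
category_protos = rng.standard_normal((n_categories, n_feats))

# 1) Learn a mapping from brain activity to visual features
#    (ridge regression is just one possible model).
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, Y_train)

# 2) Predict feature vectors for unseen brain activity.
Y_pred = decoder.predict(X_test)

# 3) Identify the category with a similarity metric (cosine similarity here;
#    Pearson correlation or Euclidean distance are common alternatives).
sims = cosine_similarity(Y_pred, category_protos)
pred_labels = sims.argmax(axis=1)

accuracy = (pred_labels == test_labels).mean()
print(f"Identification accuracy on synthetic data: {accuracy:.2f}")
```

On real data, the choice of the regression model in step 1 and the similarity metric in step 3 are exactly the factors whose effect on decoding accuracy the paper evaluates.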

Citation (APA)

Papadimitriou, A., Passalis, N., & Tefas, A. (2019). Decoding generic visual representations from human brain activity using machine learning. In Lecture Notes in Computer Science (Vol. 11131 LNCS, pp. 597–606). Springer. https://doi.org/10.1007/978-3-030-11015-4_45
