Optimization of Neuroprosthetic Vision via End-to-End Deep Reinforcement Learning


Abstract

Visual neuroprostheses are a promising approach to restoring basic sight in visually impaired people. A major challenge is to condense the sensory information contained in a complex environment into meaningful stimulation patterns at low spatial and temporal resolution. Previous approaches relied on task-agnostic feature extractors such as edge detectors or semantic segmentation, which are likely suboptimal for specific tasks in complex dynamic environments. As an alternative, we propose to optimize stimulation patterns by end-to-end training of a feature extractor using deep reinforcement learning agents in virtual environments. We present a task-oriented evaluation framework to compare different stimulus generation mechanisms, such as static edge-based approaches and adaptive end-to-end approaches like the one introduced here. Our experiments in Atari games show that stimulation patterns obtained via task-dependent, end-to-end optimized reinforcement learning yield equivalent or improved performance compared to fixed feature extractors at high difficulty levels. These findings underscore the relevance of adaptive reinforcement learning for neuroprosthetic vision in complex environments.
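The pipeline the abstract describes can be sketched in miniature: a high-resolution frame is condensed by a trainable feature extractor into a low-resolution electrode stimulation pattern, a simulated phosphene percept is rendered from that pattern, and the percept becomes the observation for the reinforcement learning agent. The resolutions, the linear encoder, and the blob-style phosphene renderer below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME = (64, 64)   # input frame resolution (assumed, Atari-like)
GRID = (16, 16)    # electrode grid resolution (assumed)

# Trainable encoder weights. In the end-to-end approach, these would be a
# deep network whose parameters are optimized jointly with the RL objective;
# a fixed edge detector would take this component's place in the baselines.
W = rng.normal(0.0, 0.01, size=(GRID[0] * GRID[1], FRAME[0] * FRAME[1]))

def encode(frame: np.ndarray) -> np.ndarray:
    """Condense a frame into a binary electrode stimulation pattern."""
    logits = W @ frame.ravel()
    return (logits > 0).astype(np.float32).reshape(GRID)

def render_phosphenes(pattern: np.ndarray) -> np.ndarray:
    """Toy phosphene simulator: upsample each active electrode to a block."""
    scale = FRAME[0] // GRID[0]
    return np.kron(pattern, np.ones((scale, scale), dtype=np.float32))

frame = rng.random(FRAME)
pattern = encode(frame)          # (16, 16) binary stimulation pattern
percept = render_phosphenes(pattern)  # (64, 64) simulated percept
# The percept, not the raw frame, is what the RL policy observes; training
# the policy end-to-end therefore shapes W toward task-relevant features.
```

This forward pass only illustrates the data flow; the actual optimization would backpropagate the agent's return through the policy and encoder.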

Citation (APA)

Küçükoğlu, B., Rueckauer, B., Ahmad, N., Van Steveninck, J. D. R., Güçlü, U., & Van Gerven, M. (2022). Optimization of neuroprosthetic vision via end-to-end deep reinforcement learning. International Journal of Neural Systems, 32(11). https://doi.org/10.1142/S0129065722500526
