Attention-based view selection networks for light-field disparity estimation

123 citations · 33 readers (Mendeley)

Abstract

This paper introduces a novel deep network for estimating depth maps from a light field image. To utilize the views more effectively and reduce redundancy among views, we propose a view selection module that generates an attention map indicating the importance of each view and its potential contribution to accurate depth estimation. By exploiting the symmetric property of light field views, we enforce symmetry in the attention map and further improve accuracy. With the attention map, our architecture utilizes all views more effectively and efficiently. Experiments show that the proposed method achieves state-of-the-art accuracy and ranks first on a popular benchmark for disparity estimation from light field images.
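The mechanism the abstract describes can be illustrated with a minimal sketch. This is not the authors' network (their module is learned end-to-end inside a deep architecture); it is a hypothetical NumPy illustration of the idea: score each view, turn the scores into attention weights, enforce symmetry about the center view, and use the weights to fuse per-view matching costs into one cost volume. All function names and the row-major symmetry pairing are assumptions for illustration.

```python
import numpy as np

def view_attention(scores):
    """Softmax over per-view scores -> attention weights summing to 1."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def symmetrize(weights):
    """Enforce the light-field symmetry the paper exploits: views at
    symmetric positions around the center view share one weight.
    Assuming views are ordered so that view i and view n-1-i are
    symmetric partners, averaging with the reversed vector ties them."""
    return 0.5 * (weights + weights[::-1])

def fuse_costs(per_view_costs, weights):
    """Attention-weighted sum of per-view cost volumes,
    shape (views, H, W, disparities) -> (H, W, disparities)."""
    return np.tensordot(weights, per_view_costs, axes=(0, 0))

# Toy example: one row of a 9x9 light field, 8 disparity hypotheses.
rng = np.random.default_rng(0)
scores = rng.normal(size=9)            # would be predicted by a network
w = symmetrize(view_attention(scores)) # symmetric attention weights
costs = rng.random((9, 4, 4, 8))       # per-view matching costs
fused = fuse_costs(costs, w)           # single cost volume, shape (4, 4, 8)
```

The symmetry constraint halves the number of effectively free weights, which is one way to regularize the attention map; the learned version in the paper achieves this inside the network rather than by post-hoc averaging.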

Citation (APA)
Tsai, Y. J., Liu, Y. L., Ouhyoung, M., & Chuang, Y. Y. (2020). Attention-based view selection networks for light-field disparity estimation. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 12095–12103). AAAI press. https://doi.org/10.1609/aaai.v34i07.6888

Readers' Seniority

PhD / Postgrad / Masters / Doc: 13 (76%)
Researcher: 3 (18%)
Professor / Associate Prof.: 1 (6%)

Readers' Discipline

Computer Science: 12 (71%)
Engineering: 5 (29%)
