VR interface for designing multi-view-camera layout in a large-scale space

Abstract

Attention has focused on sports broadcasting that uses free-viewpoint video, which integrates multi-viewpoint images inside a computer and reproduces the appearance observed from an arbitrary viewpoint. For multi-view video shooting, multiple cameras must be arranged so that they surround the target space. In a large-scale space such as a soccer stadium, it is necessary to determine where the cameras can be installed and to understand what kind of multi-view video can be shot from those positions. However, such information is difficult to obtain in advance, so “location hunting” is usually required. This paper presents a VR interface that supports the preliminary design of a multi-view camera arrangement in a large-scale space. The interface takes the shooting requirements for multi-view capture and the viewing requirements for observing the generated video, and outputs a multi-view camera layout on a 3D model of the space. By using our interface, the labor and time required to determine the layout of multi-view cameras are expected to be drastically reduced.
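The paper itself describes the full method; as a rough illustration of the kind of computation such an interface performs, the Python sketch below places cameras evenly along an arc around a gaze point from a small set of shooting requirements (camera count, distance, mounting height, arc span) and aims each camera at the target. The ShootingRequirements class, the layout_cameras function, and all parameter names are hypothetical stand-ins chosen for this example and are not taken from the paper.

    import math
    from dataclasses import dataclass


    @dataclass
    class ShootingRequirements:
        # Hypothetical container for "shooting requirements"; the fields
        # below are illustrative, not the authors' data model.
        num_cameras: int      # number of cameras surrounding the target space
        radius_m: float       # distance from the gaze point to each camera
        height_m: float       # installation height, e.g. stadium stands
        arc_deg: float = 360  # angular span of the camera arc around the target


    def layout_cameras(target_xyz, req):
        """Place cameras evenly along an arc around a target gaze point and
        aim each camera at that point. Returns a list of dicts holding a
        camera position and a look-at point."""
        tx, ty, tz = target_xyz
        span = math.radians(req.arc_deg)
        # A full circle needs num_cameras angular steps; a partial arc needs
        # one fewer so that both endpoints of the arc receive a camera.
        if req.arc_deg >= 360 or req.num_cameras == 1:
            step = span / req.num_cameras
        else:
            step = span / (req.num_cameras - 1)
        cameras = []
        for i in range(req.num_cameras):
            theta = i * step
            position = (tx + req.radius_m * math.cos(theta),
                        ty + req.radius_m * math.sin(theta),
                        req.height_m)
            cameras.append({"position": position, "look_at": (tx, ty, tz)})
        return cameras


    if __name__ == "__main__":
        # Example: eight cameras on a half circle, 60 m from the center spot,
        # mounted 15 m above the pitch.
        req = ShootingRequirements(num_cameras=8, radius_m=60.0,
                                   height_m=15.0, arc_deg=180)
        for cam in layout_cameras((0.0, 0.0, 0.0), req):
            print(cam)

In the interface described by the paper, such a candidate layout would then be rendered on the stadium's 3D model so that the user can inspect the resulting viewpoints against the viewing requirements.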

Cite

APA

Matsubara, N., Shishido, H., & Kitahara, I. (2020). VR interface for designing multi-view-camera layout in a large-scale space. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12242 LNCS, pp. 130–140). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58465-8_9
