Abstract
Digital images appearing on displays in everyday activities (e.g., photos on a smartphone) are rendered automatically and instantly, without manual intervention, so that we can appreciate them seamlessly. In contrast, shape displays require manually designed actuation outputs for input images in order to render 3D shapes. In this work, we aim to achieve automatic, on-the-spot actuation of digital images so that we can seamlessly see 3D physical images. To this end, we developed BulkScreen, an image projection system that automatically renders 3D shapes of input images on a vertical pin-array screen. Our approach is based on deep-neural-network saliency estimation coupled with our post-processing algorithm. We believe this spontaneous actuation mechanism facilitates applications of shape displays such as real-time picture browsing and display advertisement, building on the key benefit of representing physical shapes: tangibility.
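The abstract describes a two-stage pipeline: estimate a saliency map of the input image, then post-process it into per-pin extensions for the vertical pin-array screen. The following minimal sketch illustrates that idea under stated assumptions; it is not the authors' implementation. OpenCV's classical spectral-residual saliency stands in for the paper's deep-neural-network estimator, and the pin grid size (PIN_ROWS, PIN_COLS) and maximum pin travel (MAX_EXTENSION_MM) are hypothetical placeholders.

```python
# Sketch of a saliency-to-pin-height pipeline in the spirit of BulkScreen.
# Assumptions (not from the paper): the pin grid size, the pin travel range, and
# the use of OpenCV's spectral-residual saliency as a stand-in for a DNN model.
import cv2
import numpy as np

PIN_ROWS, PIN_COLS = 16, 24      # hypothetical pin-array resolution
MAX_EXTENSION_MM = 30.0          # hypothetical maximum pin extension

def image_to_pin_heights(image_bgr: np.ndarray) -> np.ndarray:
    """Map an input image to a grid of pin extensions via a saliency map."""
    # 1. Saliency estimation (placeholder for the paper's deep-neural-network model).
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image_bgr)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # 2. Post-processing: smooth, downsample to the pin grid, and normalize to [0, 1].
    sal_map = cv2.GaussianBlur(sal_map.astype(np.float32), (9, 9), 0)
    grid = cv2.resize(sal_map, (PIN_COLS, PIN_ROWS), interpolation=cv2.INTER_AREA)
    grid -= grid.min()
    if grid.max() > 0:
        grid /= grid.max()

    # 3. Convert normalized saliency to physical pin extensions.
    return grid * MAX_EXTENSION_MM

if __name__ == "__main__":
    img = cv2.imread("photo.jpg")            # any input photo
    if img is None:
        raise FileNotFoundError("photo.jpg not found")
    heights = image_to_pin_heights(img)
    print(heights.shape, heights.min(), heights.max())
```

The returned array could then be sent to the pin actuators row by row; how the real system drives its hardware is not described in the abstract, so that step is omitted here.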
Citation
Arakawa, R., Tanaka, Y., Kawarasaki, H., & Maeada, K. (2020). BulkScreen: Saliency-based automatic shape representation of digital images with a vertical pin-array screen. In TEI 2020 - Proceedings of the 14th International Conference on Tangible, Embedded, and Embodied Interaction (pp. 461–466). Association for Computing Machinery, Inc. https://doi.org/10.1145/3374920.3374973