Automatic Summarization Method for First-Person-View Video Based on Object Gaze Time


Abstract

Many first-person lifelog videos are long and often include scenes of little use, so watching them requires a considerable amount of time. In this study, we therefore propose an automatic summarization system for first-person-view videos that combines gaze tracking with object detection, using the time a user gazes at an object as the measure of scene importance. Because gaze is useful for capturing a user's intention and interest, the resulting summaries reflect the user's interest and conscious focal points. In our experiment, summary videos generated by the proposed system were rated higher than summaries whose scenes were extracted at random. These results suggest that our system is useful for watching videos rapidly while summarizing them to reflect user interest. The system is applicable in many fields, including behavior recognition, visual diary creation, and support for patients with memory impairment.
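The core idea of the abstract can be sketched as a simple scoring rule: accumulate how long the user's gaze rests on each detected object, mark objects whose total gaze time exceeds a threshold as important, and keep the frames in which an important object is being looked at. The sketch below is an illustration of that idea, not the authors' implementation; the input format (per-frame gaze points and object bounding boxes, e.g. from an eye tracker and an off-the-shelf object detector) and the threshold are assumptions.

```python
def gaze_hit(gaze, box):
    """Return True if the 2-D gaze point falls inside the bounding box."""
    x, y = gaze
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def summarize(frames, fps=30, min_gaze_seconds=1.0):
    """Select summary frames by cumulative gaze time on objects.

    `frames` is a list of (gaze_point, {object_id: bbox}) tuples, one per
    video frame (hypothetical format for illustration). Returns the indices
    of frames to include in the summary video.
    """
    # 1) Accumulate gaze time (in seconds) per object id.
    gaze_time = {}
    per_frame_hits = []
    for gaze, objects in frames:
        hits = [oid for oid, box in objects.items() if gaze_hit(gaze, box)]
        per_frame_hits.append(hits)
        for oid in hits:
            gaze_time[oid] = gaze_time.get(oid, 0.0) + 1.0 / fps

    # 2) Objects gazed at for at least `min_gaze_seconds` are "important".
    important = {oid for oid, t in gaze_time.items() if t >= min_gaze_seconds}

    # 3) Keep every frame in which an important object is gazed at.
    return [i for i, hits in enumerate(per_frame_hits)
            if any(oid in important for oid in hits)]
```

For example, if the gaze stays on a cup for three of four frames at 2 fps, the cup accumulates 1.5 s of gaze time, so those three frames are kept and the frame where the gaze wanders off is dropped.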

Citation (APA)

Hamaoka, K., & Kono, Y. (2021). Automatic Summarization Method for First-Person-View Video Based on Object Gaze Time. In Advances in Intelligent Systems and Computing (Vol. 1269 AISC, pp. 39–44). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58282-1_7
