Automatic Summarization of Activities Depicted in Instructional Videos by Use of Speech Analysis


Abstract

Existing activity-recognition-based assisted living solutions have adopted a relatively rigid approach to modelling activities. To address the deficiencies of such approaches, a goal-oriented solution has been proposed that offers a method of flexibly modelling activities. This approach has a disadvantage, however: the way a goal is performed may vary, requiring different video clips to be associated with each variation. To address this shortcoming, rich metadata is needed to facilitate the automatic sequencing and matching of appropriate video clips. This paper introduces a mechanism for automatically generating rich metadata that details the actions depicted in video files, thereby facilitating matching and sequencing. The mechanism was evaluated on 14 video files, producing annotations with a high degree of accuracy.
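
The abstract outlines a speech-analysis pipeline: take an instructional video, transcribe its speech track, and distil the transcript into timestamped, action-level metadata. The following is a minimal sketch of that idea, not the authors' implementation: Whisper stands in for whichever speech recogniser the paper used, and the file name and verb list are illustrative assumptions.

```python
# Minimal sketch: derive action metadata for an instructional video from its
# speech track. Whisper is a stand-in recogniser; the verb list and file name
# are illustrative assumptions, not taken from the paper.
import whisper

VIDEO_PATH = "instructional_clip.mp4"  # hypothetical input file

# Small set of action verbs to look for in the transcript (assumed).
ACTION_VERBS = {"fill", "boil", "pour", "stir", "open", "close", "place", "cut"}

def summarise_actions(video_path: str) -> list[dict]:
    """Transcribe the video's audio and return timestamped action mentions."""
    model = whisper.load_model("base")     # downloads the model on first use
    result = model.transcribe(video_path)  # ffmpeg extracts the audio track
    annotations = []
    for segment in result["segments"]:     # each segment carries start/end times
        words = {w.strip(".,!?").lower() for w in segment["text"].split()}
        actions = sorted(words & ACTION_VERBS)
        if actions:
            annotations.append({
                "start": segment["start"],
                "end": segment["end"],
                "actions": actions,
                "text": segment["text"].strip(),
            })
    return annotations

if __name__ == "__main__":
    for entry in summarise_actions(VIDEO_PATH):
        print(f"{entry['start']:6.1f}s-{entry['end']:6.1f}s  {entry['actions']}")
```

The resulting timestamped action annotations are the kind of rich metadata the paper proposes for matching and sequencing clips against variations in how a goal is performed.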

Citation (APA)

Rafferty, J., Nugent, C. D., Liu, J., & Chen, L. (2014). Automatic Summarization of Activities Depicted in Instructional Videos by Use of Speech Analysis. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8868, 123–130. https://doi.org/10.1007/978-3-319-13105-4_20
