An analytical evaluation of search by content and interaction patterns on multimodal meeting records

  • Matt M. Bouamrane
  • Saturnino Luz

Abstract

It has been suggested that combining content-based indexing with automatically generated temporal metadata might help improve search and browsing of recordings of computer-mediated collaborative activities such as on-line meetings, which are characterised by extensive multimodal communication. This paper presents an analytical evaluation of the effectiveness of these techniques as implemented through automatic speech recognition and temporal mapping. In particular, it assesses the extent to which this strategy can help uncover contextual relationships between audio and text segments in recorded remote meetings. Results show that even simple temporal mapping can effectively support retrieval of recorded audio segments, improve retrieval performance in situations where speech recognition alone would have exhibited prohibitively high word error rates, and provide a basic form of semantic adaptation.
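The core idea described above, linking text and audio segments through their temporal relationships so that a match in one modality can surface related material in the other, can be illustrated with a minimal sketch. The segment representation, function names, and the interval-overlap criterion below are illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical sketch of temporal mapping between text and audio segments
# in a recorded meeting. Names and the overlap criterion are assumptions.

def overlaps(a_start, a_end, b_start, b_end):
    """True if the intervals [a_start, a_end) and [b_start, b_end) intersect."""
    return a_start < b_end and b_start < a_end

def map_text_to_audio(text_segments, audio_segments):
    """For each text segment, collect the audio segments it overlaps in time.

    text_segments:  list of (id, start, end) tuples for text contributions
    audio_segments: list of (id, start, end) tuples for speech exchanges
    Returns a dict mapping each text id to its temporally related audio ids.
    """
    mapping = {}
    for t_id, t_start, t_end in text_segments:
        mapping[t_id] = [a_id for a_id, a_start, a_end in audio_segments
                         if overlaps(t_start, t_end, a_start, a_end)]
    return mapping

# A content-based query that matches text segment "t1" can be expanded to the
# audio spoken around the same time, even when ASR transcripts of that audio
# are too error-prone to match the query terms directly.
text = [("t1", 10.0, 15.0), ("t2", 40.0, 42.0)]
audio = [("a1", 8.0, 12.0), ("a2", 14.0, 25.0), ("a3", 30.0, 38.0)]
print(map_text_to_audio(text, audio))  # {'t1': ['a1', 'a2'], 't2': []}
```

This captures only the simplest form of temporal mapping the abstract refers to; the paper's evaluation concerns how well such mappings uncover genuinely contextual relationships between modalities.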


