A system to support accurate transcription of information systems lectures for disabled students


Abstract

Despite the substantial progress that has been made in the area of Automatic Speech Recognition (ASR), the performance of current systems is still below the level required for accurate transcription of lectures. This paper explores a different approach, focusing on automating the editing process for lecture transcripts produced by ASR software. The resultant Semantic and Syntactic Transcription Analysing Tool (SSTAT), based on natural language processing and human interface design techniques, is a step forward in the production of meaningful post-lecture materials with minimal investment of time and effort by academic staff, and it responds to the challenge of meeting the needs of students with disabilities. This paper reports on the results of a study to assess the potential of SSTAT to make the transcription process for Information Systems lectures more efficient and to determine the level of correction required to render the transcripts usable by students with a range of disabilities. © 2011 Papadopoulos & Pearson.
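To give a rough sense of what automated support for editing ASR transcripts can look like, the sketch below flags words that fall outside a domain vocabulary so a human editor can review them. This is a generic illustration only, not the SSTAT implementation described in the paper; the vocabulary, sample transcript, and function names are assumptions chosen for demonstration.

    # Illustrative sketch: flag words in an ASR transcript that are not in a
    # domain vocabulary, so a human editor can review likely recognition errors.
    # Vocabulary, transcript, and names are assumptions, not the SSTAT method.
    import re

    DOMAIN_VOCABULARY = {
        "information", "systems", "database", "entity", "relationship",
        "normalisation", "lecture", "the", "a", "an", "of", "and", "is",
    }

    def flag_suspect_words(transcript, vocabulary):
        """Return (position, word) pairs for words outside the vocabulary."""
        words = re.findall(r"[a-zA-Z']+", transcript.lower())
        return [(i, w) for i, w in enumerate(words) if w not in vocabulary]

    if __name__ == "__main__":
        # A split word ("normal isation") is a typical ASR error this check catches.
        asr_output = "The database normal isation of and entity relationship"
        for position, word in flag_suspect_words(asr_output, DOMAIN_VOCABULARY):
            print(f"word {position}: '{word}' not in domain vocabulary, review")

Running this prints the two fragments of the mis-recognised word "normalisation", marking them for correction; a real editing aid would combine such checks with further syntactic and semantic analysis.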

Author-supplied keywords

  • Accessibility
  • Automatic Speech Recognition (ASR)
  • Human-Computer Interaction (HCI)
  • Information Systems Teaching
  • Natural Language Processing (NLP)


Find this document

  • SGR: 84869115748
  • PUI: 366052166
  • SCOPUS: 2-s2.0-84869115748
  • ISBN: 9781742102399

Authors

  • Miltiades Papadopoulos

  • Elaine Pearson
