This paper presents a preliminary analysis of American Sign Language (ASL) predicate motion signatures, obtained with a motion capture system, aimed at identifying a predicate's event structure as telic or atelic. The pilot data demonstrate that production differences between signed predicates can be used to model the probability of a predicate belonging to the telic or atelic class from its 3D motion signature, using either the maximal velocity achieved within the sign, or both the maximal velocity and the minimal acceleration of each predicate. A computational solution to identifying predicate types in ASL video data could significantly simplify the task of identifying the verbal complements, arguments, and modifiers that compose the rest of the sentence, and ultimately contribute to solving the problem of automatic ASL recognition.
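The features named in the abstract, maximal velocity and minimal acceleration, can be derived from 3D marker trajectories by finite differencing. The following is an illustrative sketch only, not the paper's implementation: the sampling rate, the use of `np.gradient`, and the reading of "minimal acceleration" as peak deceleration along the speed profile are all assumptions.

```python
import numpy as np

def motion_features(positions, fps=120.0):
    """Extract (max velocity, min acceleration) from a 3D trajectory.

    positions : (T, 3) array of marker positions over T frames
    fps       : assumed motion-capture sampling rate in frames/second
    """
    dt = 1.0 / fps
    # Frame-wise 3D velocity via central finite differences
    vel = np.gradient(positions, dt, axis=0)
    # Scalar speed profile of the sign
    speed = np.linalg.norm(vel, axis=1)
    # Tangential acceleration (derivative of speed)
    acc = np.gradient(speed, dt)
    # Max velocity within the sign; min acceleration = peak deceleration
    return speed.max(), acc.min()
```

These two scalars per predicate would then serve as inputs to a probabilistic classifier over the telic/atelic classes.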
Citation:
Malaia, E., Borneman, J., & Wilbur, R. B. (2008). Analysis of ASL Motion capture data towards identification of verb type. In Semantics in Text Processing, STEP 2008 - Conference Proceedings (pp. 155–164). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1626481.1626494