An active model for facial feature tracking

81 Citations · 27 Readers

Abstract

We present a system for finding and tracking a face and extracting global and local animation parameters from a video sequence. The system uses an initial colour processing step to find a rough estimate of the position, size, and in-plane rotation of the face, followed by a refinement step driven by an active model. The latter step refines the previous estimate and also extracts local animation parameters. The system is able to track the face and some facial features in near real-time, and can compress the result into a bitstream compliant with MPEG-4 face and body animation.
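
The refinement stage (the active model itself) is specific to the paper, but the initial colour-based step can be sketched generically. The following is a minimal illustration, assuming OpenCV and textbook YCrCb skin-colour thresholds rather than the paper's actual colour model; it only produces the rough position, size, and in-plane rotation estimate that the active model would subsequently refine.

```python
# Minimal sketch of a colour-based coarse face estimate (not the paper's code).
import cv2
import numpy as np

def coarse_face_estimate(frame_bgr):
    """Rough (centre, size, in-plane angle) of the dominant skin-coloured blob."""
    # Skin-colour segmentation in YCrCb space; the thresholds below are common
    # textbook values, not the ones used in the paper.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Clean up the mask with a morphological opening.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)

    # A rotated bounding rectangle gives position, size, and in-plane rotation,
    # which the active-model refinement stage would take as its starting point.
    (cx, cy), (w, h), angle = cv2.minAreaRect(blob)
    return (cx, cy), (w, h), angle

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # any video source
    ok, frame = cap.read()
    if ok:
        print(coarse_face_estimate(frame))
    cap.release()
```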

References (via Scopus)

Snakes: Active contour models (13,640 citations)
Nonlinear dimensionality reduction by locally linear embedding (13,162 citations)
A global geometric framework for nonlinear dimensionality reduction (11,546 citations)

Cited by (via Scopus)

Facial Landmark Detection: A Literature Survey (285 citations)
Medical image segmentation on GPUs - A comprehensive review (216 citations)
On appearance based face and facial action tracking (77 citations)

Citation (APA)

Ahlberg, J. (2002). An active model for facial feature tracking. EURASIP Journal on Applied Signal Processing, 2002(6), 566–571. https://doi.org/10.1155/S1110865702203078

Readers' Seniority

PhD / Post grad / Masters / Doc: 12 (60%)
Researcher: 4 (20%)
Professor / Associate Prof.: 2 (10%)
Lecturer / Post doc: 2 (10%)

Readers' Discipline

Computer Science: 10 (50%)
Engineering: 8 (40%)
Neuroscience: 1 (5%)
Psychology: 1 (5%)
