Generic multimedia multimodal agents paradigms and their dynamic reconfiguration at the architectural level


Abstract

Multimodal fusion for natural human–computer interaction involves complex intelligent architectures that are subject to unexpected user errors and mistakes. These architectures must react to events occurring simultaneously, and possibly redundantly, across different input media. In this paper, intelligent agent-based generic architectures for multimedia multimodal dialog protocols are proposed. Global agents are decomposed into their relevant components, each of which is modeled separately; the elementary models are then linked together to obtain the full architecture. The generic components of the application are monitored by an agent-based expert system that can perform dynamic reconfiguration, adaptation, and evolution at the architectural level. For validation, the proposed multiagent architectures and their dynamic reconfiguration are applied to practical examples, including a W3C application.
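The abstract's central idea, an agent-based monitor that reconfigures the architecture at runtime when an input component misbehaves, can be illustrated with a minimal sketch. All names here (`MonitorAgent`, `Component`, the fallback routing) are hypothetical illustrations, not the paper's actual design:

```python
# Minimal sketch (hypothetical names): a monitoring agent that performs
# dynamic reconfiguration of multimodal input components at runtime.
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Component:
    """One input modality (e.g. speech, keyboard) with a handler."""
    name: str
    handle: Callable[[str], str]
    healthy: bool = True

class MonitorAgent:
    """Expert-system-style monitor: watches registered components and
    reroutes events to a fallback when a component becomes unhealthy."""
    def __init__(self) -> None:
        self.components: Dict[str, Component] = {}
        self.fallbacks: Dict[str, str] = {}

    def register(self, comp: Component, fallback: Optional[str] = None) -> None:
        self.components[comp.name] = comp
        if fallback is not None:
            self.fallbacks[comp.name] = fallback

    def dispatch(self, name: str, event: str) -> str:
        comp = self.components[name]
        if not comp.healthy and name in self.fallbacks:
            # Dynamic reconfiguration: swap in the fallback component.
            comp = self.components[self.fallbacks[name]]
        return comp.handle(event)

monitor = MonitorAgent()
monitor.register(Component("speech", lambda e: f"speech:{e}"), fallback="keyboard")
monitor.register(Component("keyboard", lambda e: f"keyboard:{e}"))

monitor.components["speech"].healthy = False   # simulate a recognizer failure
print(monitor.dispatch("speech", "open file"))  # rerouted to the keyboard agent
```

The sketch only shows the reroute-on-failure case; the paper's architecture also covers adaptation and evolution, which would correspond to adding or replacing components in `MonitorAgent` at runtime.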

Citation (APA)

Djenidi, H., Benarif, S., Ramdane-Cherif, A., Tadj, C., & Levy, N. (2004). Generic multimedia multimodal agents paradigms and their dynamic reconfiguration at the architectural level. EURASIP Journal on Applied Signal Processing, 2004(11), 1688–1707. https://doi.org/10.1155/S1110865704402212
