Movienet: a movie multilayer network model using visual and textual semantic cues

Discovering content and stories in movies is one of the most important topics in multimedia content research, and network models have proven to be an efficient choice for this purpose. When an audience watches a movie, they naturally follow the characters and the relationships between them. For this reason, most of the models developed so far are based on social network analysis and focus essentially on the characters at play. By analyzing character interactions, we can obtain a broad picture of the narration’s content. Other works have proposed to exploit semantic elements such as scenes, dialogues, etc.; however, these are always captured from a single facet. Motivated by these limitations, we introduce in this work a multilayer network model that captures the narration of a movie from its script, its subtitles, and the movie content. After introducing the model and the extraction process from the raw data, we perform a comparative analysis of the whole six-movie cycle of the Star Wars saga. Results demonstrate the effectiveness of the proposed framework for video content representation and analysis.
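For illustration, the multilayer idea in the abstract can be sketched as a small in-memory structure in which each semantic facet (e.g., dialogue extracted from the script/subtitles, scene co-appearance extracted from the video) forms a separate layer over a shared set of character nodes. The class, layer names, and characters below are hypothetical examples, not the authors' actual extraction pipeline:

```python
from collections import defaultdict

# Hypothetical sketch of a multilayer network, stored as
# {layer_name: {node: set_of_neighbors}}. Each layer is one
# semantic facet of the movie (dialogue, scene co-appearance, ...).
class MultilayerNetwork:
    def __init__(self):
        self.layers = defaultdict(lambda: defaultdict(set))

    def add_edge(self, layer, u, v):
        # Undirected edge between u and v inside one layer.
        self.layers[layer][u].add(v)
        self.layers[layer][v].add(u)

    def degree(self, node):
        """Aggregate degree of a node summed across all layers."""
        return sum(len(adj[node])
                   for adj in self.layers.values() if node in adj)

net = MultilayerNetwork()
# Dialogue layer: who talks to whom (illustrative Star Wars characters).
net.add_edge("dialogue", "Luke", "Leia")
net.add_edge("dialogue", "Han", "Leia")
# Scene layer: co-appearance in the same scene.
net.add_edge("scene", "Luke", "Vader")
net.add_edge("scene", "Luke", "Leia")

print(net.degree("Luke"))   # 3 (1 dialogue edge + 2 scene edges)
print(sorted(net.layers))   # ['dialogue', 'scene']
```

Keeping the layers separate is what allows a per-facet analysis (e.g., a character central in dialogue but peripheral on screen), while aggregate measures such as the cross-layer degree above recover the usual single-network view.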




Mourchid, Y., Renoust, B., Roupin, O., Văn, L., Cherifi, H., & Hassouni, M. E. (2019). Movienet: a movie multilayer network model using visual and textual semantic cues. Applied Network Science, 4(1).
