Abstract
During natural and man-made disasters, people use social media platforms such as Twitter to post textual and multimedia content reporting updates about injured or dead people, infrastructure damage, and missing or found people, among other information types. Studies have revealed that this online information, if processed in a timely and effective manner, is extremely useful for humanitarian organizations to gain situational awareness and plan relief operations. In addition to the analysis of textual content, recent studies have shown that imagery content on social media can significantly boost disaster response. However, while extensive research has focused on extracting useful information from textual content, limited work has explored imagery content or the combination of both content types. One reason is the lack of labeled imagery data in this domain. Therefore, in this paper, we aim to address this limitation by releasing a large multimodal dataset collected from Twitter during different natural disasters. We provide three types of annotations, which are useful for a number of crisis response and management tasks carried out by different humanitarian organizations.
Alam, F., Ofli, F., & Imran, M. (2018). CrisisMMD: Multimodal twitter datasets from natural disasters. In 12th International AAAI Conference on Web and Social Media, ICWSM 2018 (pp. 465–473). AAAI Press. https://doi.org/10.1609/icwsm.v12i1.14983