Multimodal Combination of Text and Image Tweets for Disaster Response Assessment
Date
2022-07-06
Author
Kotha, Saideshwar
Haridasan, Smitha
Rattani, Ajita
Bowen, Aaron
Rimmington, Glyn
Dutta, Atri
Citation
Kotha, S., Haridasan, S., Rattani, A., Bowen, A., Rimmington, G., & Dutta, A. (2022, July 6). Multimodal combination of text and image tweets for disaster response assessment. International Workshop on Data-driven Resilience Research, Leipzig, Germany. https://dataweek.de/d2r2-22/
Abstract
Social media platforms are a vital source of information in times of natural and man-made disasters.
People use social media to report updates about injured or dead people, infrastructure damage, and missing
or found people, among other information. Studies show that social media data, if processed in a timely
and effective manner, could provide important insights that help humanitarian organizations plan relief activities.
However, real-time analysis of social media data using machine learning algorithms poses multiple
challenges and requires processing large amounts of labeled data. The Multimodal Twitter Datasets from
Natural Disasters (CrisisMMD) is one of the datasets that provides annotations as well as textual and
image data to help researchers develop crisis response systems. In this paper, we analyzed multimodal
data from CrisisMMD related to seven major natural calamities, including earthquakes, floods, hurricanes,
and wildfires, and proposed an effective fusion-based decision-making technique to classify social media
data into Informative and Non-informative categories. The Informative tweets are then classified into
various humanitarian categories such as rescue volunteering or donation efforts, not-humanitarian,
infrastructure and utility damage, affected individuals, and other relevant information. The proposed
multimodal fusion methodology outperforms the text-tweet baseline by 6.98% in the Informative
category and 11.2% in the Humanitarian category, and the image-tweet baseline by 4.5% in the
Informative category and 6.39% in the Humanitarian category.
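
As a minimal illustration of the kind of decision-level fusion described above, the Python sketch below averages the class probabilities produced by separate text and image classifiers for the same tweet and picks the highest-scoring class. The weighting scheme, class labels, and function names are illustrative assumptions; the abstract does not specify the paper's exact fusion rule.

import numpy as np

# Hypothetical sketch of decision-level (late) fusion for tweet
# classification. The weighted average below is an illustrative
# assumption, not the authors' published fusion rule.

CLASSES = ["informative", "not_informative"]

def fuse_predictions(text_probs: np.ndarray,
                     image_probs: np.ndarray,
                     text_weight: float = 0.5) -> str:
    """Combine softmax outputs from a text model and an image model
    for the same tweet, then return the highest-scoring class."""
    fused = text_weight * text_probs + (1.0 - text_weight) * image_probs
    return CLASSES[int(np.argmax(fused))]

# Example: the text model leans informative, the image model is uncertain.
text_probs = np.array([0.80, 0.20])
image_probs = np.array([0.55, 0.45])
print(fuse_predictions(text_probs, image_probs))  # -> "informative"

The same fused score vector could feed a second-stage classifier that assigns Informative tweets to the humanitarian categories listed above.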