
    Multimodal Combination of Text and Image Tweets for Disaster Response Assessment

    View/Open
    Text of paper from conference proceedings (465.0 KB)
    Slides from the presentation (2.322 MB)
    Date
    2022-07-06
    Author
    Kotha, Saideshwar
    Haridasan, Smitha
    Rattani, Ajita
    Bowen, Aaron
    Rimmington, Glyn
    Dutta, Atri
    Citation
    Kotha, S., Haridasan, S., Rattani, A., Bowen, A., Rimmington, G., Dutta, A. (2022, July 6). Multimodal combination of text and image tweets for disaster response assessment. International Workshop on Data-driven Resilience Research, Leipzig, Germany. https://dataweek.de/d2r2-22/
    Abstract
    Social media platforms are a vital source of information in times of natural and man-made disasters. People use social media to report updates about injured or dead people, infrastructure damage, and missing or found people, among other information. Studies show that social media data, if processed promptly and effectively, could provide important insights to humanitarian organizations planning relief activities. However, real-time analysis of social media data using machine learning algorithms poses multiple challenges and requires processing large amounts of labeled data. Multi-modal Twitter Datasets from Natural Disasters (CrisisMMD) is one of the datasets that provide annotations as well as textual and image data to help researchers develop a crisis response system. In this paper, we analyzed multi-modal data from CrisisMMD, related to seven major natural calamities such as earthquakes, floods, hurricanes, and wildfires, and proposed an effective fusion-based decision-making technique to classify social media data into Informative and Non-informative categories. The Informative tweets are then classified into various humanitarian categories, such as rescue volunteering or donation efforts, not-humanitarian, infrastructure and utility damage, affected individuals, and other relevant information. The proposed multi-modal fusion methodology outperforms the text tweets-based baseline by 6.98% in the Informative category and 11.2% in the Humanitarian category, and outperforms the image tweets-based baselines by 4.5% in the Informative category and 6.39% in the Humanitarian category.
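    The fusion-based decision making the abstract describes can be illustrated with a minimal score-level (late) fusion sketch: a text classifier and an image classifier each produce per-class probabilities for a tweet, and a weighted average of the two decides the final label. The class names, weighting rule, and function names below are illustrative assumptions, not the paper's exact method.

    ```python
    # Hypothetical sketch of score-level (late) fusion for tweet
    # classification. The labels and the weighted-average rule are
    # illustrative assumptions, not the method from the paper.
    import numpy as np

    CLASSES = ["Informative", "Non-informative"]

    def fuse_predictions(text_probs, image_probs, text_weight=0.5):
        """Combine per-class probabilities from a text model and an
        image model with a weighted average, then pick the top class."""
        text_probs = np.asarray(text_probs, dtype=float)
        image_probs = np.asarray(image_probs, dtype=float)
        fused = text_weight * text_probs + (1.0 - text_weight) * image_probs
        return CLASSES[int(np.argmax(fused))], fused

    # Example: text model leans Informative, image model leans the other way;
    # with text_weight=0.6 the fused scores are 0.64 vs 0.36.
    label, fused = fuse_predictions([0.8, 0.2], [0.4, 0.6], text_weight=0.6)
    print(label)  # -> Informative
    ```

    In practice the per-modality probabilities would come from trained text and image models, and the fusion weight could be tuned on validation data.
    
    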
    URI
    https://soar.wichita.edu/handle/10057/23775
    Collections
    • Aaron Bowen

