HumAID: Human-Annotated Disaster Incidents Data from Twitter with Deep Learning Benchmarks

Published: 9 April 2021 | Version 1 | DOI: 10.17632/zppcyk2fdt.1
Contributor:
Firoj Alam

Description

Social networks are widely used for information consumption and dissemination, especially during time-critical events such as natural disasters. Despite its significantly large volume, social media content is often too noisy for direct use in any application. Therefore, it is important to filter, categorize, and concisely summarize the available content to facilitate effective consumption and decision-making. To address such issues, automatic classification systems have been developed using supervised modeling approaches, thanks to earlier efforts on creating labeled datasets. However, existing datasets are limited in different aspects (e.g., size, presence of duplicates) and are less suitable to support more advanced and data-hungry deep learning models. In this paper, we present a new large-scale dataset with ∼77K human-labeled tweets, sampled from a pool of ∼24 million tweets across 19 disaster events that happened between 2016 and 2019. Moreover, we propose a data collection and sampling pipeline, which is important for social media data sampling for human annotation. We report multiclass classification results using classic and deep learning (fastText and transformer) based models to set the ground for future studies. The dataset and associated resources are publicly available: https://crisisnlp.qcri.org/humaid_dataset.html
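The description mentions multiclass classification benchmarks with classic and deep learning models. The following is a minimal sketch of a classic baseline (TF-IDF + logistic regression) on data of this kind, not the paper's reported fastText or transformer setup. File names (humaid_train.tsv, humaid_test.tsv) and column names (tweet_text, class_label) are assumptions; adjust them to match the actual release at https://crisisnlp.qcri.org/humaid_dataset.html.

```python
# Minimal baseline sketch for HumAID-style multiclass tweet classification.
# Assumes tab-separated files with "tweet_text" and "class_label" columns
# (file and column names are hypothetical; check the downloaded release).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Load the train/test splits (hypothetical file names).
train = pd.read_csv("humaid_train.tsv", sep="\t")
test = pd.read_csv("humaid_test.tsv", sep="\t")

# Represent tweets with word uni/bigram TF-IDF features.
vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
X_train = vec.fit_transform(train["tweet_text"])
X_test = vec.transform(test["tweet_text"])

# Train a multiclass logistic regression classifier on the humanitarian labels.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train["class_label"])

# Report per-class precision/recall/F1 on the test split.
print(classification_report(test["class_label"], clf.predict(X_test)))
```

A transformer-based model (as benchmarked in the paper) would typically replace the TF-IDF features with a fine-tuned pretrained encoder, but the data loading and evaluation steps stay the same.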

Files not available for this dataset

This contains only metadata

Steps to reproduce

https://arxiv.org/submit/3689816
https://crisisnlp.qcri.org/humaid_dataset

Institutions

Qatar Computing Research Institute

Categories

Social Media, Informatics, Classification System, Disaster Response

Licence