Event Extraction Andersen's Fairy Tales

Published: 27 May 2025 | Version 2 | DOI: 10.17632/22v3kcgks3.2
Contributor:
Erna Daniati

Description

The data collection process consisted of compiling texts from three main online platforms: Project Gutenberg (https://www.gutenberg.org/cache/epub/1597/pg1597-images.html), Andersen Stories (https://www.andersenstories.com/en/andersen_fairy-tales/list), and the HCA Gilead website (http://hca.gilead.org.il/). These resources offer digitized editions of Hans Christian Andersen’s fairy tales, including multiple translations and versions. The texts were gathered using web scraping and text extraction methods: Python tools such as BeautifulSoup were used to extract content from HTML, while Pandas was used to organize and clean the collected data. The selection criteria emphasized relevance to Andersen’s original stories, availability in English, and completeness of the texts. To normalize the data, HTML tags were removed, titles were made consistent, and the texts were divided into sentences or organized into structured, event-based records (Subject-Verb-Object-Temporal-Location). The datasets are freely available and legally sourced from public-domain websites that offer unrestricted access to literary works, ensuring adherence to copyright and usage regulations.
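As an illustration of the scraping and cleaning described above, the minimal sketch below fetches one of the pages listed earlier and tabulates its text with Pandas. The exact pages, HTML structure, and cleaning rules used for the published dataset are not specified here, so those details are assumptions.

# Hedged sketch of the collection/cleaning step, assuming requests,
# beautifulsoup4, and pandas are installed. The page structure handling
# is illustrative and may differ from what was actually used.
import requests
from bs4 import BeautifulSoup
import pandas as pd

url = "https://www.gutenberg.org/cache/epub/1597/pg1597-images.html"
html = requests.get(url, timeout=30).text

soup = BeautifulSoup(html, "html.parser")
# Drop non-story elements before extracting the text.
for tag in soup(["script", "style", "header", "footer"]):
    tag.decompose()

text = soup.get_text(separator="\n")
# Keep non-empty lines only, with whitespace normalized.
lines = [line.strip() for line in text.splitlines() if line.strip()]

# Organize the lines into a simple table for later cleaning with Pandas.
df = pd.DataFrame({"line": lines})
df.to_csv("raw_story_lines.csv", index=False, encoding="utf-8")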

Files

Steps to reproduce

The following steps outline the process used to develop the Andersen Fairy Tales Dataset for Narrative Analysis Based on Event Extraction Approach. These steps can be replicated by researchers to validate or extend the dataset.

1. Data Collection from Public Sources
Sources: Texts were collected from three open-access websites: Project Gutenberg, Andersen Stories, and the Gilead HCA Archive. Only English translations of Andersen’s original fairy tales were selected. Inclusion criteria: textual completeness, readability, and relevance to Andersen’s core works.

2. Text Preprocessing
Web scraping: HTML content was extracted using Python with BeautifulSoup.
Cleaning: Irrelevant sections (e.g., footers, license texts, metadata) were removed.
Normalization: Titles were standardized across versions, special characters and inconsistent punctuation were removed, and all text was converted to UTF-8 encoding.

3. Sentence Segmentation
Texts were split into sentences; manual correction was applied where necessary due to poetic syntax or punctuation inconsistencies.

4. Event Extraction
Each sentence was parsed using spaCy to extract Subject-Verb-Object-Temporal-Location (SVO-TL) triplets. Dependency parsing was used to identify grammatical relationships between words. (Hedged code sketches of this step and of the validation step follow this list.)

5. Dataset Structuring
Extracted events were stored in a structured format:
sentences_data.csv: one sentence per row, along with the story title.
extracted_events.csv: SVO-TL triplets with metadata.

6. Data Validation
A subset of the dataset was manually validated by two annotators to ensure the accuracy of the extracted events. Inter-Annotator Agreement (IAA) was calculated using Cohen's Kappa (κ > 0.8).

7. Public Data Release
The final datasets (raw_stories.txt, sentences_data.csv, extracted_events.csv, events_data.json) were uploaded to an open repository (e.g., Mendeley Data, Zenodo), and a DOI or persistent identifier was assigned for citation purposes.
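The sketch below illustrates the event extraction in Step 4, assuming spaCy with the en_core_web_sm model. The heuristics shown (nsubj for the subject, dobj/obj for the object, DATE/TIME entities for the temporal slot, and GPE/LOC/FAC entities for the location slot) are illustrative assumptions, not necessarily the exact rules used to build extracted_events.csv.

# Hedged sketch of SVO-TL extraction with spaCy dependency parsing.
# Assumes the model is installed: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_svotl(sentence):
    """Return a simple Subject-Verb-Object-Temporal-Location record."""
    doc = nlp(sentence)
    event = {"subject": None, "verb": None, "object": None,
             "temporal": None, "location": None}
    for token in doc:
        if token.dep_ == "nsubj" and event["subject"] is None:
            event["subject"] = token.text
            event["verb"] = token.head.lemma_
        elif token.dep_ in ("dobj", "obj") and event["object"] is None:
            event["object"] = token.text
    # Named entities fill the temporal and location slots where available.
    for ent in doc.ents:
        if ent.label_ in ("DATE", "TIME") and event["temporal"] is None:
            event["temporal"] = ent.text
        elif ent.label_ in ("GPE", "LOC", "FAC") and event["location"] is None:
            event["location"] = ent.text
    return event

print(extract_svotl("The little mermaid saved the prince at night near the shore."))

For the validation in Step 6, inter-annotator agreement can be computed with scikit-learn's cohen_kappa_score, as in the short example below; the label lists are placeholders, not the actual annotation data.

# Hedged sketch of the Cohen's Kappa check between two annotators.
from sklearn.metrics import cohen_kappa_score

# Placeholder labels: 1 = extracted event judged correct, 0 = incorrect.
annotator_a = [1, 1, 0, 1, 1, 0, 1, 1]
annotator_b = [1, 1, 0, 1, 0, 0, 1, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values above 0.8 indicate strong agreement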

Institutions

Universitas Negeri Malang

Categories

Natural Language Processing, Text Extraction

Licence