Validation Data for the Seq2Turk Spell Check Model

Published: 29 January 2024 | Version 1 | DOI: 10.17632/2tmh2y9z73.1
Contributors:
Burak Aytan, C. Okan Sakar

Description

This is the validation set used in training the seq-to-seq RoBERTa model in our paper. We collected approximately 38 gigabytes of text from various internet sources, including Turkish Wikipedia, Turkish OSCAR, and several news sites. From these data, error-free sentences were selected, and typo-containing variants were then generated using the typo-producing functions described in the paper. In the dataset, the [ORG] token is used as a delimiter: the part before [ORG] contains the sentence with typos, while the part after [ORG] contains the correctly written version of the sentence.
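The delimiter format described above can be parsed with a few lines of code. The sketch below is illustrative, assuming one sentence pair per line; the example sentence pair is invented, not taken from the dataset.

```python
# Minimal sketch for parsing one line of the dataset.
# Format (per the description): "<sentence with typos> [ORG] <corrected sentence>"

DELIMITER = "[ORG]"

def parse_line(line: str) -> tuple[str, str]:
    """Split a dataset line into a (noisy, correct) sentence pair."""
    noisy, _, correct = line.partition(DELIMITER)
    return noisy.strip(), correct.strip()

# Illustrative (hypothetical) Turkish example:
noisy, correct = parse_line(
    "Bu cümlede yazm hatası var. [ORG] Bu cümlede yazım hatası var."
)
print(noisy)    # the sentence containing the typo
print(correct)  # the corrected sentence
```

A file of such lines can then be read pair by pair with an ordinary line iterator before feeding the pairs to the model.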

Institutions

Bahcesehir Universitesi Muhendislik ve Doga Bilimleri Fakultesi

Categories

Turkish Language, Correction
