A CycleGAN deep learning technique for artifact reduction in fundus photography
Description of this data
Herein, we present a deep learning technique to automatically remove artifacts from fundus photographs. Using a CycleGAN model, we synthesized artifact-reduced retinal images from low-quality inputs and validated the technique on an independent test dataset.
This study included a total of 2,206 anonymized retinal images. We collected fundus photographs without qualification, including both normal and pathologic retinal images. Photographs with and without artifacts were crawled from Google Image and Dataset Search using English keywords related to the retina. The search strategy was based on the key terms “fundus photography”, “retinal image”, “artifact”, “quality assessment”, “retinal image grade”, “diabetic retinopathy”, “age-related macular degeneration”, “glaucoma”, “cataract”, and “fundus dataset”. Images with artifacts were manually classified by the authors. In total, 1,146 images with artifacts and 1,060 images without artifacts were collected. The experimental process complied with the Declaration of Helsinki. This study did not require ethics committee approval because the researchers used open, web-based, deidentified data.
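CycleGAN training expects the two image domains (here, photographs with and without artifacts) to be kept in separate folders. As a minimal sketch of this preparation step, the helper below copies manually labeled images into `trainA`/`trainB` directories; the CSV label format, file names, and folder layout are assumptions for illustration, not part of the original dataset description.

```python
import csv
import shutil
from pathlib import Path

def split_domains(label_csv, src_dir, out_dir):
    """Copy labeled fundus images into two CycleGAN domain folders.

    label_csv rows are assumed to be "filename,has_artifact", where
    has_artifact is "1" (with artifacts -> trainA) or "0" (clean -> trainB).
    """
    out = Path(out_dir)
    (out / "trainA").mkdir(parents=True, exist_ok=True)
    (out / "trainB").mkdir(parents=True, exist_ok=True)
    with open(label_csv, newline="") as f:
        for name, has_artifact in csv.reader(f):
            dest = "trainA" if has_artifact.strip() == "1" else "trainB"
            shutil.copy(Path(src_dir) / name, out / dest / name)
```

A held-out portion of each folder would then serve as the independent test set mentioned above.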
We used the TensorFlow Colaboratory CycleGAN tutorial to develop and validate the CycleGAN model; all code is available on that page (https://www.tensorflow.org/tutorials/generative/cyclegan).
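The key idea that lets CycleGAN learn artifact removal from unpaired images is the cycle-consistency loss: an image translated to the other domain and back should match the original. As a minimal illustration, the sketch below computes that L1 cycle loss with NumPy; the weighting factor of 10 follows the TensorFlow tutorial, but the function name and toy inputs are ours.

```python
import numpy as np

def cycle_consistency_loss(real, reconstructed, lam=10.0):
    # L1 cycle loss |F(G(x)) - x|, averaged over pixels and scaled by
    # lambda (the TensorFlow CycleGAN tutorial uses lambda = 10).
    return lam * np.mean(np.abs(real - reconstructed))

# Toy example: an image reconstructed after an artifact -> clean -> artifact
# round trip, off by 0.1 per pixel, gives a loss of 10 * 0.1 = 1.0.
x = np.ones((4, 4, 3))
x_reconstructed = x * 0.9
loss = cycle_consistency_loss(x, x_reconstructed)
```

In the full model this term is added to the adversarial losses of both generators, so the network is penalized for changes that cannot be undone, which discourages altering retinal anatomy while removing artifacts.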
This dataset may include images from MESSIDOR, HRF, FIRE, DRIVE, Kaggle DMR, and freely available images from Google Image Search. Images with and without artifacts were categorized to investigate artifact reduction.
Cite this dataset
Yoo, Tae Keun (2020), “A CycleGAN deep learning technique for artifact reduction in fundus photography”, Mendeley Data, v1 http://dx.doi.org/10.17632/dh2x8v6nf8.1