Deep Learning-Based Style Transfer from Ultra-Widefield to Traditional Fundus Photography

Published: 19 February 2020 | Version 2 | DOI: 10.17632/m3kg8p8cxf.2
Contributor:
TaeKeun Yoo

Description

This study included 451 anonymized ultra-widefield (UWF) images and 745 traditional fundus photograph (FP) images. The UWF images, which include both normal and pathologic retinas, were drawn from the Tsukazaki Optos Public Project. The FP images were extracted from publicly accessible databases using Google Images and Google Dataset Search with English keywords related to the retina. The search strategy was based on the following key terms: “fundus photography”, “retinal image”, and “fundus dataset”. All images were manually reviewed by two board-certified ophthalmologists; blurred and low-quality images were removed to keep the two image domains clearly separated, and duplicate images were also discarded. Consequently, 451 UWF images (with artifacts) and 745 FP images (without artifacts) were collected. The UWF images were cropped and masked after registration before being used for CycleGAN training.
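The cropping and masking step can be illustrated with a minimal sketch. The exact registration and crop parameters are not specified in this description, so the center crop and circular mask below are only an assumption about how a UWF image might be matched to the circular field of view of a traditional fundus photograph; the function name and dummy image are hypothetical.

```python
import numpy as np

def crop_and_mask(img: np.ndarray) -> np.ndarray:
    """Center-crop an H x W x C image to a square, then zero out
    pixels outside the inscribed circle.

    This is an illustrative sketch only; the dataset's actual
    registration-based cropping parameters are not published here.
    """
    h, w = img.shape[:2]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    square = img[top:top + side, left:left + side].copy()

    # Build a circular mask centered in the square crop, mimicking
    # the round field of view of traditional fundus photography.
    yy, xx = np.ogrid[:side, :side]
    center = (side - 1) / 2.0
    inside = (yy - center) ** 2 + (xx - center) ** 2 <= (side / 2.0) ** 2
    square[~inside] = 0  # black out everything outside the circle
    return square

# Example: a dummy 100 x 160 all-white "UWF" image.
dummy = np.full((100, 160, 3), 255, dtype=np.uint8)
out = crop_and_mask(dummy)
print(out.shape)           # (100, 100, 3)
print(out[0, 0].tolist())  # corner lies outside the circle: [0, 0, 0]
```

The masked corners matter for CycleGAN: without them, the generator can learn to translate the black UWF border itself rather than the retinal content, so both domains are brought to a similar circular layout first.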

Files