Data for "Prediction of Phakic Intraocular Lens Vault Using Machine Learning of Anterior Segment Optical Coherence Tomography Metrics"

Published: 11-01-2021| Version 2 | DOI: 10.17632/ffn745r57z.2
TaeKeun Yoo


Prediction of Phakic Intraocular Lens Vault Using Machine Learning of Anterior Segment Optical Coherence Tomography Metrics.

Authors: Kazutaka Kamiya, MD, PhD; Ik Hee Ryu, MD, MS; Tae Keun Yoo, MD; Jung Sub Kim, MD; In Sik Lee, MD, PhD; Jin Kook Kim, MD; Wakako Ando, CO; Nobuyuki Shoji, MD, PhD; Tomofusa Yamauchi, MD, PhD; Hitoshi Tabuchi, MD, PhD.

We hypothesized that machine learning of preoperative biometric data obtained by AS-OCT may be clinically beneficial for predicting the actual ICL vault. We therefore built a random forest model to predict the ICL vault after surgery. This multicenter study comprised 1745 eyes of 1745 consecutive patients (656 men and 1089 women) who underwent EVO ICL implantation (V4c and V5 Visian ICL with KS-AquaPORT) for the correction of moderate to high myopia and myopic astigmatism, and who completed at least a 1-month follow-up, at Kitasato University Hospital (Kanagawa, Japan) or at B&VIIT Eye Center (Seoul, Korea).

This data file (RFR_model(feature=12).mat) is the final trained random forest model for MATLAB 2020a.

Python version:
***************************************************************
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Connect to the data in your Google Drive (Google Colab)
from google.colab import auth
auth.authenticate_user()
from google.colab import drive
drive.mount('/content/gdrive')

# Change the path for your custom data.
# In this case, we used the ICL vault prediction dataset of preoperative measurements.
dataset = pd.read_csv('gdrive/My Drive/ICL/data_icl.csv')
dataset.head()

# Optimal features (sorted by importance):
# 1. ICL size  2. ICL power  3. LV  4. CLR  5. ACD  6. ATA
# 7. MSE  8. Age  9. Pupil size  10. WTW  11. CCT  12. ACW
y = dataset['Vault_1M']
X = dataset.drop(['Vault_1M'], axis=1)

# Split the dataset into training and test data, if necessary.
# For example, an 8:2 split gives a simple hold-out validation.
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.2, random_state=0)
# In our study, the training (B&VIIT Eye Center, n=1455) and test
# (Kitasato University, n=290) sets were already defined, so this
# split was not necessary for our analysis.

# An optimal-parameter search could be performed in this section.
parameters = {'bootstrap': True,
              'min_samples_leaf': 3,
              'n_estimators': 500,
              'criterion': 'mae',  # renamed 'absolute_error' in scikit-learn >= 1.0
              'min_samples_split': 10,
              'max_features': 'sqrt',
              'max_depth': 6,
              'max_leaf_nodes': None}
RF_model = RandomForestRegressor(**parameters)
RF_model.fit(train_X, train_y)
RF_predictions = RF_model.predict(test_X)
importance = RF_model.feature_importances_
***************************************************************


Steps to reproduce

Please see "RandomForestModelSelection.png" and "HowToUse.png" files.