Automated learning of glaucomatous visual fields from OCT images using a comprehensive, segmentation-free 3D convolutional neural network model
Abstract: A segmentation-free 3D Convolutional Neural Network (3DCNN) model was adopted to estimate the Visual Field (VF) in glaucoma cases using Optical Coherence Tomography (OCT) images. This study, conducted at a university hospital, included 6335 participants (12,325 eyes). Two models were trained: one on the Glaucoma-Specific Training Group (GTG) and one on the Comprehensive Training Group (CTG), which included various ocular conditions without manual preselection. The CTG model showed significantly better performance than the GTG model in estimating VF thresholds and Mean Deviation (MD) for both the Humphrey Field Analyzer (HFA) 24-2 and HFA 10-2 test patterns (p < 0.001). Strong correlations were observed between the estimated and actual VF thresholds for HFA 24-2 (Pearson's r: 0.878) and HFA 10-2 (r: 0.903), as well as MD for HFA 24-2 (r: 0.911) and HFA 10-2 (r: 0.944) in the CTG. The CTG model demonstrated lower estimation errors than the GTG model, with smaller errors in severe cases, and its performance remained relatively stable even in advanced glaucoma. The model's ability to learn from a comprehensive dataset without human annotation highlights its potential for large-scale training, potentially improving glaucoma assessment and monitoring in clinical practice. Further validation in external datasets and exploration in different clinical settings are warranted.

| Main Authors: | Makoto Koyama (Minamikoyasu Eye Clinic); Yuta Ueno (Ito Eye Clinic); Yoshikazu Ito (Ito Eye Clinic); Tetsuro Oshika (Department of Ophthalmology, Faculty of Medicine, University of Tsukuba); Masaki Tanito (Department of Ophthalmology, Shimane University Faculty of Medicine) |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-04-01 |
| Series: | Scientific Reports (ISSN 2045-2322) |
| Online Access: | https://doi.org/10.1038/s41598-025-98511-0 |
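The record describes the approach only at the abstract level. As a rough illustration of the kind of architecture the title refers to, the PyTorch sketch below shows a segmentation-free 3D CNN that regresses per-point VF thresholds and MD directly from a raw OCT volume. This is not the authors' implementation: the layer sizes, input resolution, and the 52-point HFA 24-2 output layout are assumptions made for the example.

```python
# Minimal sketch (not the published model): a segmentation-free 3D CNN that
# regresses per-point visual-field thresholds and mean deviation (MD) from a
# raw OCT volume. Shapes, channel counts, and the 52-point output are assumed.
import torch
import torch.nn as nn

class VFRegressor3D(nn.Module):
    def __init__(self, n_points: int = 52):  # 52 test points assumed for HFA 24-2
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling over the volume; no layer segmentation step
        )
        self.thresholds = nn.Linear(64, n_points)  # per-point sensitivity (dB)
        self.md = nn.Linear(64, 1)                 # mean deviation (dB)

    def forward(self, volume: torch.Tensor):
        # volume: (batch, 1, depth, height, width), e.g. an OCT scan resampled to 64x128x128
        h = self.features(volume).flatten(1)
        return self.thresholds(h), self.md(h)

model = VFRegressor3D()
dummy = torch.randn(2, 1, 64, 128, 128)   # two hypothetical resampled OCT volumes
vf_hat, md_hat = model(dummy)
print(vf_hat.shape, md_hat.shape)          # torch.Size([2, 52]) torch.Size([2, 1])
```

The global average-pooling head is what keeps a model of this shape independent of any retinal-layer segmentation step, which is the property the title emphasizes; the actual network in the paper may differ substantially.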
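The abstract also reports Pearson correlations and estimation errors between estimated and measured values. The sketch below shows one plausible way to compute such figures for a set of predictions; it assumes thresholds are pooled across all eyes and test points before computing r, and it uses the mean threshold as a crude stand-in for MD in the demo data, neither of which is necessarily the convention used in the study.

```python
# Illustrative evaluation (assumed pooling convention, not the study's protocol):
# Pearson's r for pooled point-wise thresholds and for per-eye MD, plus mean
# absolute error of the threshold estimates in dB.
import numpy as np
from scipy.stats import pearsonr

def evaluate(est_thresholds, true_thresholds, est_md, true_md):
    """est/true_thresholds: (n_eyes, n_points) arrays in dB; est/true_md: (n_eyes,) arrays."""
    r_points, _ = pearsonr(est_thresholds.ravel(), true_thresholds.ravel())
    r_md, _ = pearsonr(est_md, true_md)
    mae_points = np.mean(np.abs(est_thresholds - true_thresholds))  # point-wise error in dB
    return {"r_thresholds": r_points, "r_md": r_md, "mae_db": mae_points}

# Demo with random data standing in for HFA 24-2 results (52 points per eye).
rng = np.random.default_rng(0)
true_vf = rng.uniform(0, 35, size=(100, 52))
est_vf = true_vf + rng.normal(0, 3, size=true_vf.shape)
# Mean threshold used here only as a stand-in for MD; real MD is an age-corrected deviation.
print(evaluate(est_vf, true_vf, est_vf.mean(axis=1), true_vf.mean(axis=1)))
```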