Dr. Wafaa Mohib Mohamed Abd-El Hamed Shalash :: Publications:

Title:
Diabetic Retinopathy Fundus Image Classification and Lesions Localization System Using Deep Learning
Authors: Wejdan L. Alyoubi, Maysoon F. Abulkhair and Wafaa M. Shalash
Year: 2021
Keywords: computer-aided diagnosis; convolutional neural networks; deep learning; diabetic retinopathy; diabetic retinopathy classification; diabetic retinopathy lesions localization; YOLO
Journal: Sensors
Volume: 21
Issue: 11
Pages: 3704
Publisher: MDPI
Local/International: International
Paper Link:
Full paper Wafaa Mohib Mohamed Abd-El Hamed Shalash_Diabetic retinopathy fundus image classification and lesions localization system using deep learning.pdf
Supplementary materials Not Available
Abstract:

Diabetic retinopathy (DR) is a disease resulting from diabetes complications that causes non-reversible damage to retinal blood vessels. DR is a leading cause of blindness if not detected early. Currently available DR treatments are limited to stopping or delaying the deterioration of sight, which highlights the importance of regular screening with high-efficiency, computer-based systems to diagnose cases early. Fully automatic diagnosis systems can exceed manual techniques, avoiding misdiagnosis while reducing time, effort and cost. The proposed system classifies DR images into five stages (no-DR, mild, moderate, severe and proliferative DR) and localizes the affected lesions on the retina surface. The system comprises two deep learning-based models. The first model (CNN512) takes the whole image as input to a CNN and classifies it into one of the five DR stages, achieving accuracies of 88.6% and 84.1% on the public DDR and APTOS Kaggle 2019 datasets, respectively. The second model uses an adapted YOLOv3 model to detect and localize the DR lesions, achieving a 0.216 mAP for lesion localization on the DDR dataset, which improves on the current state-of-the-art results. Finally, the two proposed structures, CNN512 and YOLOv3, were fused to classify DR images and localize DR lesions, obtaining 89% accuracy with 89% sensitivity and 97.3% specificity, exceeding the current state-of-the-art results.
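The two-stage design described in the abstract (a CNN assigning one of five DR grades, fused with a YOLO-style detector that supplies lesion boxes) can be sketched as follows. This is a minimal illustrative outline only: the function and variable names, the confidence threshold, and the detection tuple format are assumptions for the sketch, not taken from the paper's code.

```python
# Illustrative sketch of the paper's two-model fusion. All names here
# (classify_stage, filter_lesions, fuse, the 0.25 threshold) are hypothetical.

DR_STAGES = ["no-DR", "mild", "moderate", "severe", "proliferative"]

def classify_stage(probs):
    """Stand-in for the CNN512 classifier: pick the DR stage with the
    highest softmax probability."""
    if len(probs) != len(DR_STAGES):
        raise ValueError("expected one probability per DR stage")
    return DR_STAGES[max(range(len(probs)), key=probs.__getitem__)]

def filter_lesions(detections, conf_threshold=0.25):
    """Stand-in for the YOLOv3 detector's post-processing: keep
    (label, confidence, box) detections above a confidence cut."""
    return [d for d in detections if d[1] >= conf_threshold]

def fuse(probs, detections, conf_threshold=0.25):
    """Combine both models' outputs into one report: the predicted DR
    stage plus the localized lesions supporting it."""
    return {
        "stage": classify_stage(probs),
        "lesions": filter_lesions(detections, conf_threshold),
    }

if __name__ == "__main__":
    probs = [0.05, 0.10, 0.60, 0.20, 0.05]            # per-stage softmax output
    dets = [("hemorrhage", 0.80, (120, 80, 40, 40)),  # (label, conf, xywh box)
            ("exudate", 0.10, (30, 200, 20, 20))]
    print(fuse(probs, dets))
```

In the actual system the two models run independently on the same fundus image; the sketch only shows how their outputs could be merged into a single diagnosis record.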
