Raising the RD Screening Bar

Although previous studies have shown that deep learning (DL) models can detect retinal disease (RD) in ultra-widefield images, most such models are trained to detect only a single disease, limiting the scope of their application.

Now, a research team from Edinburgh University have developed a more advanced DL model that can detect several different RDs under realistic conditions – without excluding images that exhibit multiple diseases, artifacts, or borderline cases, and without discarding healthy cases to artificially balance the data (1). In this interview, Justin Engelmann, one of the lead researchers, delves deeper into what the model could mean for the future of retinal disease screening.

What have you achieved – and why?

In short, we have developed a DL model – trained on thousands of images – that can automatically detect the presence of retinal disease in Optos ultra-widefield (UWF) images. By examining which parts of an image the model focuses on when making its predictions, ophthalmologists can identify key areas of interest in the retina. This model has the potential to become an AI tool for the automated screening of images or for clinical decision support.

As for why... Sophisticated imaging equipment is becoming increasingly widespread, including at primary care institutions, such as opticians. But it is easier to miss things on a UWF image than on an ordinary color fundus image because of its scale. Thus, it would be valuable to have a tool that can automatically assess images in settings where no qualified clinician is available – or to support clinicians by providing additional information and drawing their attention to regions of interest.

How does your work build on previous research?

Our work builds especially on the pioneering efforts of Hitoshi Tabuchi and his colleagues at Tsukazaki Hospital (2). They showed that deep learning is effective in this application and kindly shared their dataset with the scientific community, helping to inform our work. Despite this research, there were still some limitations to overcome.

Firstly, each deep learning model was trained only to distinguish between healthy retinas and those with one specific disease (for example, diabetic retinopathy) and had not seen other diseases during training. Such models might become confused when encountering a disease they were not trained on, because the image would show neither a healthy retina nor a case of that specific disease. Furthermore, previous studies were selective in the data they used, discarding images of poor quality, borderline cases, other diseases, or retinas with more than one disease.

In our work, we developed a single model that can recognize seven different diseases as well as provide a prediction of whether the retina is healthy or not. Our model accounts for retinas with more than one disease, making it more practical than previously available models. We also made the conscious decision not to remove any images from the dataset, so that we could evaluate the performance of our model under more realistic conditions. After all, poor-quality images, borderline cases, and co-pathologies are very common clinical realities.
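
For readers who want a concrete picture of how a single network can flag several conditions at once, here is a minimal multi-label sketch in PyTorch. The backbone, label names, image size, and training details are illustrative assumptions rather than the architecture reported in the paper; the essential point is that each condition gets its own sigmoid output, so one image can be positive for several diseases.

    # Illustrative multi-label setup (assumed, not the authors' exact model):
    # one sigmoid output per condition, so a single image can be positive for
    # several diseases at once rather than forcing a single-class decision.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Placeholder label set, for illustration only
    CONDITIONS = [
        "diabetic_retinopathy", "glaucoma", "amd", "retinal_vein_occlusion",
        "retinal_detachment", "macular_hole", "other_disease", "healthy",
    ]

    class MultiLabelRetinaNet(nn.Module):
        def __init__(self, n_labels=len(CONDITIONS)):
            super().__init__()
            backbone = models.resnet50(weights=None)   # any CNN backbone would do
            backbone.fc = nn.Linear(backbone.fc.in_features, n_labels)
            self.backbone = backbone

        def forward(self, x):
            return self.backbone(x)                    # raw logits, one per label

    model = MultiLabelRetinaNet()
    criterion = nn.BCEWithLogitsLoss()                 # independent sigmoid per label

    # A retina with two co-occurring diseases simply has two positive targets.
    images = torch.randn(4, 3, 512, 512)
    targets = torch.zeros(4, len(CONDITIONS))
    targets[0, CONDITIONS.index("diabetic_retinopathy")] = 1.0
    targets[0, CONDITIONS.index("glaucoma")] = 1.0

    loss = criterion(model(images), targets)
    probs = torch.sigmoid(model(images))               # per-condition probabilities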

What are the main take-home points from your recent paper?

We found that our model can distinguish between healthy and diseased retinas – and even identify the specific disease(s) – with very high accuracy, including for poorer quality images.

We used data-driven methods to investigate which regions of the retina our model focuses on for specific conditions. We found that the model had learned to focus on areas where we would expect pathology to occur: for glaucoma, it focused on the optic disc; for age-related macular degeneration, on the macula; and so on. As one might expect, the posterior pole was the most significant area, but we identified this in a purely data-driven way.
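
As a concrete illustration of one such data-driven approach, the occlusion-sensitivity sketch below hides one image patch at a time and measures how much the predicted probability for a given condition drops; regions whose occlusion causes a large drop are the ones the model relies on. The function name, patch size, and overall approach are assumptions for illustration, not necessarily the exact analysis used in the paper.

    # Occlusion-sensitivity sketch: one generic, data-driven way to ask which
    # image regions a trained classifier relies on (illustrative only).
    import torch

    @torch.no_grad()
    def occlusion_map(model, image, label_idx, patch=64, stride=64):
        """Slide a grey patch over the image and record how much the predicted
        probability for one condition drops when each region is hidden."""
        model.eval()
        _, h, w = image.shape
        base = torch.sigmoid(model(image.unsqueeze(0)))[0, label_idx].item()
        rows, cols = (h + stride - 1) // stride, (w + stride - 1) // stride
        heat = torch.zeros(rows, cols)
        for i, y in enumerate(range(0, h, stride)):
            for j, x in enumerate(range(0, w, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = image.mean()
                prob = torch.sigmoid(model(occluded.unsqueeze(0)))[0, label_idx].item()
                heat[i, j] = base - prob   # a large drop marks an informative region
        return heat

    # e.g. glaucoma sensitivity for one image, using the model sketched earlier:
    # heat = occlusion_map(model, images[0], CONDITIONS.index("glaucoma"))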

Finally, we were interested in understanding for which conditions the increased field of view of UWF images was beneficial. To our surprise, we found that using just the 10 percent of the image containing the posterior pole was sufficient to achieve accuracy comparable to – though slightly worse than – using the whole image.
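
A rough sketch of that kind of comparison is shown below: the same model scores the full UWF image and a small central crop, and the resulting per-condition probabilities (or downstream accuracy) can then be compared. The 10 percent crop fraction and the assumption that the posterior pole sits at the image center are simplifications for illustration, not details taken from the paper.

    # Sketch: score the full UWF image versus only a small central crop around
    # the posterior pole (crop geometry is an illustrative simplification).
    import torch
    import torch.nn.functional as F

    def central_crop_fraction(image, area_fraction=0.10):
        """Keep a centered square covering roughly `area_fraction` of the image,
        resized back to the original resolution for the model."""
        _, h, w = image.shape
        side = int((area_fraction * h * w) ** 0.5)
        top, left = (h - side) // 2, (w - side) // 2
        crop = image[:, top:top + side, left:left + side]
        return F.interpolate(crop.unsqueeze(0), size=(h, w),
                             mode="bilinear", align_corners=False).squeeze(0)

    # full_probs = torch.sigmoid(model(image.unsqueeze(0)))
    # crop_probs = torch.sigmoid(model(central_crop_fraction(image).unsqueeze(0)))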

What do your positive results mean for the detection of retinal disease?

Our work suggests that automated detection of retinal disease in UWF images is feasible. The finding that the posterior pole alone was sufficient for performance similar to that achieved with the whole UWF image raises some interesting questions about the value of the retinal periphery in automated disease detection. Perhaps, for this specific application, UWF images do not provide a substantial benefit over ordinary color fundus photography. However, our findings could be specific to the population of our dataset.

What next?

In the future, we plan to develop our model into a practical AI tool for automated screening and clinical decision support. We would then like to trial the tool in concrete applications, especially telemedicine in rural areas. Finally, we hope to investigate the value of the retinal periphery for automated disease detection in more detail and in other populations.


References

  1. J Engelmann et al., “Detecting multiple retinal diseases in ultra-widefield fundus imaging and data-driven identification of informative regions with deep learning,” Nat Mach Intell, 4, 1143 (2022).
  2. T Nagasawa et al., “Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy,” Int Ophthalmol, 39, 2153 (2019). PMID: 30798455.
About the Author
Sarah Healey

Communicating stories in a way that is accessible to all was one of the focal points of my Creative Writing degree. Although writing magical realism is a fun endeavor (and one I still dabble in), getting to the heart of human stories has always been the driving motivator behind my writing. At Texere, I am able to connect with the people behind scientific breakthroughs and share their stories in a way that is impactful and engaging.
