In light of a recently published Eye study showing that artificial intelligence (AI) can be used to reduce both undertreatment and overtreatment of neovascular age-related macular degeneration (nAMD), The Ophthalmologist spoke with lead authors Jeffry Hogg (Moorfields Eye Hospital and University of Birmingham, UK) and Pearse Keane (Moorfields Eye Hospital and UCL Institute of Ophthalmology, UK) to learn more about their research.
Why was it important to include two different OCT systems (Topcon models and Heidelberg Spectralis) for this study?
Jeffry Hogg: If you have trained your AI model on just one type of imaging device and then use the model to analyze images taken on another, it may perform less accurately – because the model picks up subtle differences in the images produced by each device. In this case, images from Topcon were analyzed in DICOM format and images from Heidelberg in TIFF format. Though it didn’t happen here, this could have caused the AI to perform less well on images from one of the devices – a problem referred to as “domain shift.”
When the first commercial medical AI device was approved in the US by the FDA for diabetic retinopathy detection, the approval was constrained to using the device with only one type of camera – precisely because of this worry about domain shift. That’s why we wanted to ensure our AI was tested on images from two types of devices. Of course, in the real world there are more than two types of imaging device, so the end goal would be to test the model on a variety of devices.
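To make “domain shift” concrete, here is a minimal sketch of one way to check for it – this is not code from the study; the model, file paths, and labels are hypothetical, and pydicom, Pillow, and PyTorch are assumed. The idea is to normalize DICOM and TIFF B-scans into a common representation and then compare the same model’s accuracy on each device’s images.

```python
# Illustrative only: checking for domain shift by evaluating one model
# separately on images from two device/format sources (DICOM vs TIFF).
# The model, file paths, and labels are hypothetical placeholders.
import numpy as np
import pydicom                # reads DICOM exports (e.g. from the Topcon devices)
import torch
from PIL import Image         # reads TIFF exports (e.g. from the Spectralis)

def load_bscan(path: str) -> np.ndarray:
    """Return a float32 grayscale B-scan scaled to [0, 1], whatever the source format."""
    if path.lower().endswith(".dcm"):
        arr = pydicom.dcmread(path).pixel_array.astype(np.float32)
    else:  # assume TIFF
        arr = np.array(Image.open(path).convert("L"), dtype=np.float32)
    return (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)

@torch.no_grad()
def accuracy_by_device(model, samples):
    """samples: iterable of (path, label, device_name); returns per-device accuracy."""
    hits, totals = {}, {}
    for path, label, device in samples:
        x = torch.from_numpy(load_bscan(path))[None, None]  # shape (1, 1, H, W)
        pred = model(x).argmax(dim=1).item()
        hits[device] = hits.get(device, 0) + int(pred == label)
        totals[device] = totals.get(device, 0) + 1
    return {device: hits[device] / totals[device] for device in totals}

# A clear gap between, say, {"Topcon": 0.93, "Spectralis": 0.78}
# would be one warning sign of domain shift.
```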
Do you imagine future AI systems operating autonomously, rather than as a decision-making support tool?
Pearse Keane: We should not underestimate the human factors at play in delivering ophthalmology services and ensuring optimal care, both from the patient’s and clinician’s perspectives. So, for the foreseeable future, we envisage AI systems functioning as a decision-making support tool. Even if AI is initially making an assessment autonomously, in isolation from an ophthalmologist – for example in a remote location where people have no access to local eye care services – that assessment would later be reviewed by a clinician.
We’re also witnessing a growing body of evidence across various medical specialties that when AI and humans work together on decision-making, the results are better than either could achieve alone. In our own research network, we undertook an assessment of a fine-tuned large language model for human-AI clinical reasoning in ophthalmology, in collaboration with colleagues at Google DeepMind. We found that the model’s standalone accuracy matched that of ophthalmologists, but when ophthalmologists worked together with AI, they ranked the correct diagnosis higher, agreed more with one another, and enriched their management plans.
In terms of liability, how would NHS services and regulators balance responsibility between clinicians and AI manufacturers for this kind of system?
Jeffry Hogg: This is still uncertain territory, and there is not much in the way of established case law to help us understand liability implications. For AI as a Medical Device (AIaMD) products that have been approved for use under the supervision of a healthcare professional, not for autonomous use, the clinician can be considered the responsible decision maker and therefore accountable. The scenario becomes more complex in the case of autonomous tools, where clinicians are further removed from the decision-making process.
In the case of AIaMD for wet AMD, clinicians do have some flexibility in how they interpret thresholds for the amount of subretinal fluid that would indicate whether or not the condition is getting worse. For example, if the AI is providing the threshold numbers and the clinician is deciding how to interpret and act on the information, then responsibility for the care decision lies with the clinician. If an AI system were approved to not only provide the numbers but also make a treatment recommendation, then it could be argued that more liability would sit with the AIaMD manufacturer. These considerations will have to be carefully weighed by the NHS and regulators as AI becomes more widely adopted in healthcare settings.
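As a purely illustrative sketch of that division of responsibility – the fluid volumes, units, and threshold values below are invented for the example, not taken from the study – the AI supplies the measurements, while the clinician sets and interprets the threshold that triggers a review:

```python
# Hypothetical sketch: the AI only quantifies fluid; the clinician supplies
# and owns the threshold that determines whether action is considered.
from dataclasses import dataclass

@dataclass
class FluidMeasurement:
    subretinal_fluid_nl: float   # AI-derived volume estimate (nanolitres, illustrative)
    previous_fluid_nl: float     # value from the prior visit

def flag_for_review(m: FluidMeasurement, clinician_threshold_nl: float) -> bool:
    """Return True if the increase exceeds the clinician-chosen threshold."""
    increase = m.subretinal_fluid_nl - m.previous_fluid_nl
    return increase > clinician_threshold_nl

# The same AI-generated numbers may or may not trigger a review,
# depending on the threshold the responsible clinician chooses.
m = FluidMeasurement(subretinal_fluid_nl=42.0, previous_fluid_nl=30.0)
print(flag_for_review(m, clinician_threshold_nl=10.0))  # True
print(flag_for_review(m, clinician_threshold_nl=20.0))  # False
```

Keeping the threshold as an explicit clinician-supplied parameter, rather than something baked into the system, mirrors the liability argument above: the tool provides numbers, and the care decision remains with the clinician.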
Do you think patients will be comfortable with AI contributing to their treatment decisions?
Pearse Keane: In our engagement with patients, the response has been largely very positive, but of course there are concerns. When we began our collaboration with Google DeepMind in 2015-2016, we did a huge amount of work with patients to build trust around the use of anonymized data for AI training – and this at a time when AI was still relatively nascent. We were fortunate to have patient champions, such as the incredible Elaine Manna, who sadly passed away earlier this year. Elaine, who had partially lost her sight from wet AMD, was adamant that if AI can help in earlier detection of treatable, sight-threatening conditions, then it should be embraced.
Since establishing the INSIGHT Health Data Research Hub in 2019, we have involved patients and members of the public in deciding who can access data for AI development and assessing whether such projects will truly be for the benefit of patients.
Over the past decade we have come to understand that if patients believe the AI has been developed with safety, health equity and patient benefit as the utmost priorities, then they are supportive of AI contributing to treatment decisions.
Patient trust is absolutely critical, which is why we are not just developing AI technologies, but also focusing on all of the associated factors in deploying AI for use with patients. That includes training on diverse datasets, ensuring we are transparent about what data we are using, evaluating AI performance in various settings, across imaging devices and modalities, and working with clinicians so that they feel confident in the AI tools they are using.
How could AI-enabled OCT analysis be incorporated into technology such as teleophthalmology or home monitoring?
Pearse Keane: We are already seeing AI-enabled OCT analysis in teleophthalmology in some underserved areas. For example, we worked with colleagues in Western Australia on developing a mobile screening service for diabetic retinopathy – essentially an AI-enabled OCT imaging device in a van, which travels around to small, isolated communities in the Pilbara region of the outback. This enables instant point-of-care diagnosis and an on-the-spot telehealth appointment with an ophthalmologist for anyone flagged as high risk.
In Bangladesh, a team of eyecare providers is using our RETFound model on an OCT device to bring free AI-powered eye care to the remote island of Char Hasan, which is home to 800 people and has no electricity, no schools, no health centres. For the first time, residents there can have eye checks on an OCT device, reviewed by AI, and, if needed, connected by radio signal to an ophthalmologist on the mainland. We have also successfully tested a lean version of RETFound on a smartphone. So it is not hard to imagine that in the future, similar technology could be routinely used in the home, or in places with no easy access to an ophthalmologist.
What are the next steps in your own research?
Jeffry Hogg: We aim to progress the validation and evaluation of the algorithm for wet AMD testing in additional real-world scenarios – for example, investigating how the AI model could be integrated into clinical workflows, how it might interface with clinicians, and how to address their feedback on functionality as well as site-specific variations. We’ll also be exploring further enrichment of the training data, incorporating other imaging modalities and additional datasets from diverse populations.
We also think there may be learnings that we can absorb from other research projects we have underway. Together with collaborators from over 100 research groups, for example, we are building Global RETFound, the first medical AI model with globally representative data, spanning more than 65 countries. It will be trained for detection of wet AMD and other eye conditions and systemic disease. Although we already benefit from imaging data through the INSIGHT hub, representing the ethnic diversity of patients at Moorfields, Global RETFound will provide the opportunity for an even richer understanding of the nuances of how AI detects disease progression in an incredibly diverse world population.