Artificial intelligence (AI) is currently doing for the treatment of wet age-related macular degeneration (AMD) what optical coherence tomography (OCT) did for ophthalmic imaging over a decade ago – making invisible disease dynamics visible and, in doing so, changing how we plan treatment. Rather than arguing about fixed, pro re nata (PRN), or treat-and-extend regimens, we can now let AI-modeled individual disease trajectories drive injection schedules.
The issue is no longer whether anti-VEGF works for AMD, but how to sustain trial-like outcomes in real patients with real lives. Even in well-resourced clinics, many patients drift away from optimal schedules: visits are missed, “extension” goes a step too far, or burden fatigue leads to silent undertreatment. AI systems trained on large volumes of longitudinal OCT and outcome data are exposing the patterns behind these failures. They can identify which eyes are likely to tolerate longer intervals and which are destined to relapse early, well before that pattern is obvious at the slit lamp.
OCT is the natural substrate for this shift. It already underpins clinical trial endpoints and day-to-day decision-making, but clinicians still act mostly on a handful of heuristic cues: is there intraretinal fluid (IRF) or subretinal fluid (SRF), or has central subfield thickness (CST) changed? Is there obvious retinal atrophy? AI allows us to move from eyeballing snapshots to analyzing trajectories. Deep learning models can integrate subtle textural changes, layer integrity, and fluid dynamics across visits and devices, and translate them directly into clinically framed questions – for example: “How likely is reactivation if I extend?” “What injection burden should I anticipate over the next year for this eye?”
deepeye® TPS (Therapy Planning Support) is a good illustration of how this is moving from research slides into routine care. Its proposition is not to replace the clinician’s judgement, but to formalize what experienced retina specialists are already attempting mentally: forecasting disease activity and treatment need over time. By ingesting standard OCT scans and returning biomarker visualizations, disease activity assessments, and a 12-month prognosis, it reframes the consultation from “What shall we do today?” to “What course are we committing this patient to – and does that align with their life?” Crucially, the output is designed to be read in seconds, not studied like a methods section.
If this approach works at scale, it has three important implications. First, adherence stops being a purely behavioral challenge and becomes a planning problem: when patient and clinician see the likely injection journey up front, there is more room to adjust expectations, transfer care, or choose a different regimen before vision is lost. Second, we can start to talk about wet AMD treatment intensity as a quantifiable, AI-derived phenotype – something that can inform both clinical choices and the design of future trials. Third, it opens the door to “drug plus algorithm” offerings, where the therapy is supported by an approved, continuously learning tool that helps maintain outcomes in the messiness of real-world practice.
There are, of course, caveats to this type of approach. Algorithmic recommendations must be explainable enough to be challenged, not simply treated as incontestable. Validation needs to extend beyond single-vendor, single-centre datasets, particularly if we expect tools to cope with community-grade OCT and diverse patient populations. And doctors should be honest that some patients will still choose fewer injections than their predicted need, regardless of what the algorithm advises.
But the direction of travel is clear. For wet AMD, the next frontier is not another marginal gain in efficacy; it is closing the gap between what our drugs can achieve and our patients’ actual experience. AI-driven therapy planning systems like deepeye® TPS will not solve that problem alone, but they may finally give us a language – and a set of numbers – to tackle it head-on.