On a Quest to Find the Holy Grail of Imaging
A collaboration between Moorfields and Google DeepMind hopes to automate OCT analysis using the power of AI
Mark Hillen
At a Glance
- You may have heard of Google’s DeepMind division – which has artificial intelligence (AI) algorithms that can beat not only Atari videogames from the 1980s, but also the world’s best player of the world’s most complex board game, Go
- But their AI platform has health applications too, which inspired Moorfields Consultant Ophthalmologist, Pearse Keane, to assess its potential to automate and transform retinal image analysis – and disease diagnostics
- Moorfields is now sharing 1 million fundus photographs and OCT images with DeepMind, which will train its AI algorithms to detect even the earliest signs of disease pathology
- Manual retinal image analysis today requires highly trained, experienced specialists, and takes time. AI could help both speed up the process and prioritize the patients who need review and treatment the earliest
Google’s DeepMind division is famous for its artificial intelligence (AI) systems, which not only can play Atari VCS 2600 video games like a boss (1), but have even beaten Lee Se-dol, the world’s best (human) player of the ancient Chinese board game, Go (2). These achievements aren’t like those of IBM’s computer, Deep Blue, though, which beat grandmaster Garry Kasparov back in 1997. Back then, IBM programmed Deep Blue with one objective: to win at chess. It held the advantage of having a database of 700,000 grandmaster games to refer to, and a fearsome amount of computing power (in its day) to work out the consequences of each move it could play.
DeepMind’s approach is different. Rather than being pre-programmed to do anything, what’s been developed is a general-purpose, pattern-interpreting algorithm that just… “learns.” Sure, it needs multiple iterations before it can become good at a task. But it keeps getting better, and the approach seems to work. After all, it has just beaten the world’s best player at the world’s most complex board game.
But DeepMind’s next move isn’t more gameplay: it’s a collaboration with London’s Moorfields Eye Hospital. Moorfields is sharing a set of 1 million historical (and anonymized) eye scans with DeepMind, plus information about disease pathology and how it was managed. The objective is to develop an AI system that can spot disease pathologies earlier and ensure timelier patient treatment – and ultimately to help to avoid cases of preventable eye disease.
Pearse Keane, a Consultant Ophthalmologist at Moorfields, initiated the collaboration. Below is his story of how Moorfields and DeepMind got working together, what they’re currently working on, and what’s next.
How does one initiate a collaboration with Google DeepMind?
Throughout my research career I’ve had a surprising amount of luck just by sending emails to people about projects that I’m interested in. I’d seen YouTube videos on the work that the DeepMind team was doing to train their AI systems to play some of the classic Atari 2600 console games like Breakout and Pong, and by the time I’d read their Nature paper on it, I was already thinking of potential collaborations. Then in June of last year, I read an article on DeepMind in Wired magazine (3), which mentioned that Mustafa Suleyman – one of DeepMind’s cofounders and its Head of Applied AI – was interested in using AI for healthcare. I had my lightbulb moment. I sent him a message on LinkedIn and set out my stall. Thrillingly, he replied within a day or so, and within a few days I was meeting him for coffee to get the project underway. That was in July 2015, and here we are a year later.
Did you consider any other companies, or developing AI in-house?
I’ve always been interested in AI, and I’ve been following all the advances in deep learning for the past two or three years, so DeepMind immediately caught my interest. If they weren’t interested I would have probably approached some other people. I certainly know that I have colleagues at Moorfields who may be looking to collaborate with others in the AI space in the future.
Does it help that both you and DeepMind are based in London?
One of the things that really excited me about this project was how many things seemed to just click. I read that DeepMind is based in King’s Cross – just 15 minutes from Moorfields. Two of the founders, Demis Hassabis and Mustafa Suleyman, are from London themselves, and have discussed the importance of DeepMind being based here. This seemed like the perfect opportunity.
The arrangement is that you retain the data, and Google retain their AI property. Can you work with others using the same data in the future?
Yes. This is not an exclusive arrangement, and I personally, and Moorfields in general, have the capacity to work with others as well.
How well protected is the data you’re using?
We’ve actually published the protocol for our research in an open-access, peer-reviewed journal (4), and that contains all of the details of how we stripped out the metadata from the OCT images. Before any data goes to DeepMind, it’s scrutinized by the IT department at Moorfields and checked to ensure it doesn’t contain any patient-identifiable data, and then it has to be signed off by the information governance team at Moorfields to make sure it’s completely anonymized.
Then, the anonymized data goes to a health cluster – dedicated UK servers owned by the DeepMind team, on which the data is stored. The servers are graded at level three by the National Health Service (NHS) Health & Social Care Information Centre, which is actually a higher level of security than many NHS trusts maintain. Additionally, none of the data can be transported outside of the UK, and it’s not linked to any other data sets – even people working for DeepMind are unable to access this data from outside the UK. Another advantage of us both being based in London.
What’s it like working with a Google company?
I have nothing but positive things to say about the DeepMind team. Mustafa Suleyman is very focused on what benefits this can bring to the NHS, and the team aim to do things that will provide the best results for patients – they’re really interested in tackling big problems in healthcare. They’ve been incredibly diligent about issues related to information privacy. I’ve really been blown away by how seriously they take that side of things, and I think they’re in this for the right reasons. I have collaborated with other people outside the hospital setting before, and I think this has been one of the best, if not the best, of all my external collaborations.
The AI can surely come to better conclusions if it knows the patient’s history, demographics, treatment history and so on – how much data will be used?
We’ve got two strands of approach to this. The first work is on the anonymized data, which will provide more limited information. For example, we will provide the diagnosis (say, AMD) and the patient’s age, and this will be linked to a certain OCT scan, and DeepMind’s algorithms will get to work – but there will be no patient-identifiable information. This is the data set from which we’re performing our initial analysis.
We’re also planning research on pseudonymized data as well. This will include additional labels, and in particular will involve longitudinal image sets. We’ll be able to see if a patient has had OCT scans performed at multiple visits, meaning that we can then track the progression of the disease. We have taken extra measures to be very confident that this won’t allow you to identify any of the patients. We’ve got ethical approval for our pseudonymized data set, but it’s still pending UK Health Research Authority approvals.
Is the goal for DeepMind to act as a retinal image grading and triage system?
I think that’s probably a little further down the road. In the short-term, we’d like to publish our work in peer-reviewed journals, and hopefully produce an algorithm that can very accurately detect, say, AMD or diabetic retinopathy, at an early stage. At the moment we’re really just concerned with the question, “Can deep learning be used to diagnose (for example) wet AMD?”
If the answer is yes, there is a lot of potential, such as using it in a community setting to identify people who need to be seen urgently. You’ll probably have seen the article that Carrie MacEwen – the President of the Royal College of Ophthalmologists – wrote for BBC News recently (5). We have a situation in the UK today, where hospital eye clinic outpatient appointments constitute about nine percent of all clinical hospital appointments across the entire NHS. People simply cannot get an appointment in a timely fashion, because of the sheer volume of patients that we need to see. There are people losing sight because they can’t get seen urgently.
One of the limiting factors right now is that if you have an OCT machine in the community, very often, the person who takes these OCT scans doesn’t have the specialized training and experience required to interpret them. What this means is that if there’s any uncertainty, the patient gets referred to a retina specialist – even if it turns out that they’re perfectly healthy.
Ophthalmology is so subspecialized these days that even if a corneal specialist performs a retinal OCT, they may not have much experience diagnosing retinal conditions, and hence might have to refer that patient on.
I think AI definitely has the capacity to learn to identify those patients that truly need to be prioritized, allowing us to give the patients without sight-threatening diseases more routine appointments.
Who’s the “competition” in the research field?
I wouldn’t use the word “competition.” In the AI community in general, not just in a healthcare-related setting, I get the impression that there is a real spirit of making everything open source and collaborating together. That really fits well with my own academic ethos. There are other people doing exciting things in this area, and I’m open to collaborating with anyone.
Do you hope the AI can eventually move beyond diagnostics?
I think in the medium- to long-term we definitely hope that it will provide new scientific insight. In the short-term we want to see if we can get accurate diagnoses, and then we’d like to see if we can get information about disease prognosis and pathophysiology – for example, what is the risk of converting from dry to wet AMD, and what timeline might this occur on?
One of the really nice things about this collaboration is that DeepMind wants the research to be clinician-led. We don’t just give them data and leave them alone to come up with something; it’s a two-way process. They’re always looking for guidance on what features we’re interested in, what clinical problems will have the most patient benefit if solved, and so on.
I think what would be really interesting as a research question, is whether deep learning could pick up features on the OCT scan that we as humans are oblivious to, even with specialist training. I know that deep learning has been applied to things like breast cancer histopathology, and was able to pick up new features on the microscope slide that correlate with five-year survival of patients with breast cancer. To do something similar would be the Holy Grail in terms of OCT retinal imaging research.
- V Mnih et al., “Human-level control through deep reinforcement learning”, Nature, 518, 529–533 (2015). PMID: 25719670.
- D Silver et al., “Mastering the game of Go with deep neural networks and tree search”, Nature, 529, 484–489 (2016). PMID: 26819042.
- D Rowan, “DeepMind: inside Google’s super-brain”, Wired UK (2015). Available at: bit.ly/deepmindwired. Accessed July 14, 2016.
- J De Fauw et al., “Automated analysis of retinal imaging using machine learning techniques for computer vision [version 1; referees: awaiting peer review]”, F1000Research, 5, 1573 (2016). Available at: bit.ly/moorfieldsdeepmind. Accessed July 14, 2016.
- C MacEwen, “Eye risk from ‘overstretched NHS’”, BBC News (2016). Available at: bit.ly/bbccarrie. Accessed July 14, 2016.