The Matchmaker

At a Glance

  • In the UK, cataract surgery is often performed by trainees – with complication rates two- to three-fold higher than consultants
  • We have devised a cataract surgery scoring system to stratify patients according to risk, categorize surgeons by experience, and match patients to surgeons accordingly
  • Data from over 8,000 cases shows that our system removed the association between case complexity and posterior capsule rupture, and almost completely eliminated outcome differences between trainees and consultants
  • Our system reduces cataract surgery complications, assists practice management and ensures compliance with recent NICE guidelines. We believe the concept is also applicable to other surgical procedures.

Cataract surgery is very safe – but it could be even safer. A third of all cataract procedures in the UK are performed by trainees; junior trainees have posterior capsule rupture (PCR) rates of 3.2–5.1 percent (1–3), which compares poorly with the overall PCR rate of 1.9 percent. Intuitively, we all know we could improve this situation by protecting less experienced trainees from more difficult cases. But how can we do this in practice? In my clinic, we have been matching surgeons to cases on a rational basis for nine years – and our retrospective data analysis (4) shows a significant reduction in complications.

Mix and match

Like much good research, our work has been low tech – but high concept and high impact. In brief, our study builds on a 2009 report (5), which used data from around 56,000 cataract operations to calculate the odds ratio of a surgical complication arising in a given case. Though useful work, it wasn’t very user-friendly – it required a complicated program to calculate the complication probability. I wanted to make it easier for clinics to assess the risk of a given case: what we needed, I thought, was a system of classifying both patients and surgeons, and matching them accordingly (see "Examples of cataract surgery and complexity scores").

The first step was to sit down with a group of surgeons, discuss the various case presentations and risk factors, and assign each factor a risk value. We based our approach on the 11 PCR risk factors identified by the cataract national database (CND); we also incorporated important patient-specific factors (absent in the CND dataset) that suggested an experienced surgeon would be more appropriate than a trainee – for example, patients with corneal edema or only one eye. It took some thought, but eventually we identified 16 risk factors, and assigned them values that fairly represented our own surgical experience.

Next, we needed a sensible way of grouping surgeons according to experience. Bearing in mind that, in the UK, surgeons who have performed 350 or more cataract procedures are deemed competent, we created five skill categories based on the number of procedures performed: 0–50, 51–100, 101–250, 251–350, and more than 350.

Then, we had to develop a rational way of matching patient scores to surgeon skill and experience. Obviously, a Category 1 surgeon needs to be given the easiest cases, while the most complex cases should be passed to Category 5 individuals – surgeons who’ve done 350 or more cataract procedures. But where should the risk cut-off points be for each intermediate category? My feeling was that surgeons should reach the stage of ‘unconscious competence’ – where they can perform effectively without having to think about it – before they move on to the next level. Therefore, we arranged cut-offs with the intent that surgeons don’t move up to the trickier levels until they are really competent at the previous level.

Finally, as a test, we checked the scoring system against historical cases where a trainee-operated patient had developed complications. It was so exciting to see that in about 20 percent of cases that had a PCR or other complication during surgery, the trainee had been attempting cases that, according to our scoring system, were beyond their competence. It looked certain that we were onto a winner.

Examples of cataract surgery and complexity scores (4)

Factors associated with PCR

  • Male, assigned score: 1
  • Age 80–90 years, assigned score: 2
  • Dilated pupil ≤4.0 mm, assigned score: 5
  • White cataract / no fundal view, assigned score: 8

Patient-specific score

  • Significant hearing impairment, assigned score: 2
  • Pachymetry ≥600 μm, assigned score: 5
  • Permanent VA (other eye) 6/36 or worse, assigned score: 8

Complexity score and trainee recommendations

  • Group 1 complexity: total assigned score of 0–1, suitable for trainees with 0–50 cases
  • Group 5 complexity: total assigned score of ≥10, requiring surgeons with at least 351 cases

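The score-and-match idea above can be sketched in a few lines of Python. This is an illustration only: the factor weights are the published examples from the table, but the intermediate cut-offs and all function names are hypothetical placeholders, not the published system.

```python
# Toy sketch of the scoring-and-matching idea; only Group 1 (score 0-1) and
# Group 5 (score >= 10) cut-offs appear in the article, so the intermediate
# cut-offs below are invented placeholders.

# Example PCR and patient-specific risk factors with assigned scores.
RISK_SCORES = {
    "male": 1,
    "age_80_90": 2,
    "pupil_4mm_or_less": 5,
    "white_cataract_no_fundal_view": 8,
    "hearing_impairment": 2,
    "pachymetry_600um_or_more": 5,
    "other_eye_va_6_36_or_worse": 8,
}

# Hypothetical cut-offs: (maximum total score, surgeon category allowed).
CUT_OFFS = [(1, 1), (3, 2), (6, 3), (9, 4)]

def patient_score(factors):
    """Sum the assigned scores of the risk factors present in this patient."""
    return sum(RISK_SCORES[f] for f in factors)

def minimum_surgeon_category(score):
    """Return the least experienced surgeon category allowed for this score."""
    for max_score, category in CUT_OFFS:
        if score <= max_score:
            return category
    return 5  # score of 10 or more: most experienced surgeons only

# A male patient with a small pupil scores 1 + 5 = 6, so under these
# placeholder cut-offs he would need at least a Category 3 surgeon.
score = patient_score(["male", "pupil_4mm_or_less"])
print(score, minimum_surgeon_category(score))
```

The point of the sketch is how little machinery is needed: a lookup table of weights, a sum, and a threshold walk – which is exactly what makes the paper score card workable in a busy clinic.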
Real-world advantages

The theory was sound – but what about in practice? As you can imagine, it took a bit of time to introduce the system into a multi-disciplinary National Health Service (NHS) department and to get everybody on board. But to everyone’s credit it was adopted successfully. One reason for that success is the system’s simplicity: the doctor marks a few items on the patient’s score card; the nurse adds additional information, such as biometry, and double-checks it. The result is a total patient score.

We found that our system generated many unforeseen – but welcome – knock-on effects. First, the trainees were much more comfortable knowing their cases were more appropriate to their level. Second, it helped with scheduling: the trainees and the secretary would organize the list so that there were always cases of an appropriate level for the trainee, and the consultants picked up the rest. Third, the system helps us avoid situations where a case list is over-burdened with difficult cases and leads to surgeries over-running (and when that is unavoidable, the complexity scores justify the time taken, defusing discussions with clinic managers). Fourth, when patients are reluctant for a junior doctor to operate on them, we can reassure them that under our scoring system, trainees only operate within their level of competence. And fifth, it alerts clinicians to complex cases where the patient should, as part of the informed consent obligation, be informed of particular procedure risks.

The impact has been very positive in our department. And now that people have understood its advantages, it is part of the process. In fact, we’ve come to rely on it very heavily; it’s considered bad practice when a patient score is missing.

More compatibility, less complication

We’ve been running this scoring and allocation system in our department for almost a decade now. But analyzing and quantifying its effect was challenging, because the scoring system and outcomes data were stored in different databases. I was very fortunate to be joined by a trainee called Paul Nderitu, who worked on merging the two databases; from six years of operation (January 1, 2011, to December 31, 2016) and 11,468 cases, he extracted complexity data on 8,200 cases. The results of the analysis were better than I’d hoped for: our system had practically eliminated the variation between trainees and consultants in terms of complications and patient outcomes! By rationally allocating cases to appropriate surgeons, we ensured that our patients faced similar risks of complications – and achieved equivalent outcomes – regardless of the surgeon’s experience. In other words, the risk of complications is almost the same whether a trainee or a consultant is operating – simply because we match case difficulty to surgeon competence. It puts me in the happy position of being able to assure patients that they will be allocated a surgeon of a level suitable for their specific needs, and that the outcome won’t change, regardless of the individual surgeon.
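The linkage step described above amounts to an inner join on a shared case identifier. As a purely illustrative sketch – all IDs, column meanings, and values below are invented, not drawn from the real departmental databases – it might look like this:

```python
# Toy sketch of linking complexity scores to surgical outcomes on a shared
# case identifier; every value here is invented for illustration.
complexity = {101: 2, 102: 7, 103: 11}          # case_id -> complexity score
outcomes = {101: False, 103: True, 104: False}  # case_id -> PCR occurred?

# An inner-join-style merge keeps only the cases present in both databases,
# much as complexity data could be recovered for 8,200 of 11,468 operations.
merged = {
    case_id: (complexity[case_id], outcomes[case_id])
    for case_id in complexity.keys() & outcomes.keys()
}
print(sorted(merged))  # the two cases found in both toy databases
```

Cases that appear in only one database (102 and 104 here) drop out of the merged set, which is why the analyzable cohort was smaller than the full six-year caseload.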

Good matching

I don’t know if others are using our system, but plenty of people have asked me about it over the years. I’d expect interest to grow: the NICE guideline on cataract surgery in the UK, which came out earlier this year, recommends that all cataract surgery units use some form of scoring system, and ours is by far the largest and most extensively validated of the available cataract scoring systems. Remember, it was not only derived from national cataract audit data, but also tested against more than 8,000 of our own cases, which provides strong evidence that it is effective. If you want to improve outcomes (and comply with NICE guidelines if you are in the UK), you could do far worse than to adopt our system!

I believe we’ve produced something incredibly powerful. Our aim was to minimize complications, optimize outcomes, and maximize patient safety, and I think we have done all of those things. Our system rationally matches patients with surgeons in a way that is good for both. It is ideal for cataract surgery, because the procedure doesn’t vary much – but I’m sure the concept could be applied in other types of ophthalmic surgery, or other medical fields, such as orthopedics. To that end, I’d like to explore ways of publicizing our work in other surgical disciplines. I’m very proud of what we’ve done; we haven’t eliminated the learning curve in cataract surgery, but we have controlled its impact on patients. And that is a wonderful thing!

Paul Ursell is a consultant ophthalmologist at Epsom & St Helier University NHS Trust, Surrey, UK.

  1. JM Sparrow et al., “The cataract national data set electronic multi-centre audit of 55,567 operations: case-mix adjusted surgeon’s outcomes for posterior capsule rupture”, Eye (Lond), 25, 1010–1015 (2011). PMID: 21546922.
  2. P Jaycock et al., “The Cataract National Dataset electronic multicentre audit of 55,567 operations: updating benchmark standards of care in the United Kingdom and internationally”, Eye (Lond), 23, 38–49 (2009). PMID: 18034196.
  3. AC Day et al., “The Royal College of Ophthalmologists’ National Ophthalmology Database study of cataract surgery: report 1, visual outcomes and complications”, Eye (Lond), 29, 552–560 (2015). PMID: 25679413.
  4. P Nderitu and P Ursell, “Updated cataract surgery complexity stratification score for trainee ophthalmic surgeons”, J Cataract Refract Surg, 44, 709–717 (2018). PMID: 30041740.
  5. N Narendran et al., “The Cataract National Dataset electronic multicentre audit of 55,567 operations: risk stratification for posterior capsule rupture and vitreous loss”, Eye (Lond), 23, 31–37 (2009). PMID: 18327164.