In a discipline where micrometers matter and errors can permanently alter a patient’s life, ophthalmology training still relies heavily on one of the bluntest instruments available: time. Case numbers, procedures logged, or time used on a virtual-reality simulator all continue to function as proxies for real-life competence.
Rethinking what it means to be competent
Competence in ophthalmology is not an abstract concept. It is the demonstrable ability to perform complex technical, cognitive, and decision-making tasks safely, consistently, and independently. And yet these time-based systems assume that competence emerges naturally through exposure. Spend long enough in the operating room, the logic goes, and proficiency will eventually follow.
However, decades of educational research tell a different story. Our own research (1-3) has repeatedly demonstrated that learners acquire skills at vastly different rates. Some trainees reach proficiency early, while others require more extended periods of training. So when progression is dictated by time rather than competence, two things can happen: capable trainees are held back when they are already competent, and underprepared ones are pushed forward when they still require more training.
Studies on cataract surgery training using simulation-based mastery learning also revealed wide variability in how long trainees needed to reach predefined proficiency benchmarks. Crucially, once those benchmarks were met, performance in the operating room improved – independent of training duration (4).
When legacy becomes liability
The apprenticeship model – “see one, do one, teach one” – has deep roots in surgical culture. Observation, gradual participation, and increasing responsibility remain central elements of ophthalmology training. These elements are not inherently flawed – but they are insufficient on their own.
Modern ophthalmic surgery takes place in a context of limited operating room availability, increasing subspecialization, and high expectations for patient safety. Teaching opportunities vary between supervisors, feedback is often informal, and assessment frequently relies on global impressions rather than structured criteria.
Our research has shown that expert judgment alone is insufficient to reliably assess surgical competence (1,5). Even experienced surgeons often disagree on what constitutes “good enough,” and their evaluations can be influenced by bias, familiarity, or context. And so, without shared standards and objective measures, the apprenticeship model risks producing variability rather than reliability. What once worked in a different era may now expose patients and trainees to unnecessary risk.
Competency-based education
Competency-Based Medical Education (CBME) is not simply a curricular tweak. It is a fundamental shift in focus – from time spent training to outcomes achieved.
At its core, CBME asks a radical question: which skills must a trainee reliably demonstrate before being allowed to operate on a patient’s eye?
This reframing has profound implications. Progression is no longer automatic with seniority. Instead, it depends on demonstrated competence across clearly defined domains – expectations are explicit, assessment is systematic, and feedback is structured.
Work in simulation-based training provides concrete examples of how this can be operationalized: define performance standards, measure skills objectively, and require mastery before advancement (6). This approach replaces vague expectations with transparent, measurable benchmarks.
Measurement is the unpleasant but necessary backbone of CBME. Technical skills can be measured reliably using assessment tools with evidence of validity – particularly in simulation settings. Metrics such as error rates, instrument handling, tissue damage, and procedural flow all offer objective insight into a trainee’s performance. Without such measurement, educators are left to guess.
This type of objective assessment protects patients, supports trainees, and provides educators with actionable information. It enables early identification of skill gaps and targeted remediation before patients are exposed to risk.
Barriers to implementation
So if the evidence supporting CBME is strong, why has implementation been so slow?
CBME challenges deeply ingrained hierarchies. It exposes variability in teaching and assessment practices, and it disrupts the notion of seniority-based privilege. It requires faculty development, shared assessment frameworks, and institutional commitment. It also introduces a level of transparency that can feel uncomfortable to some.
Resistance can also stem from concerns about workload or loss of autonomy, while others worry that competence cannot be fully captured by metrics, despite evidence to the contrary. Often, these concerns reflect legitimate pressures within clinical practice, rather than any opposition to patient safety.
Therefore, understanding these barriers is essential – not to assign blame, but to address them thoughtfully.
The road ahead
The future of ophthalmology training must be intentional, measurable, and patient-centered.
This statement is neither radical nor unrealistic. Simulation-based mastery learning should be a standard prerequisite – not an adjunct – to operating room experience. Objective assessment tools should be embedded throughout training, not reserved for final evaluations. Faculty should be supported in developing assessment expertise alongside their surgical and clinical skills.
Crucially, progression should be flexible. Faster learners should progress without unnecessary delay, while those needing more training should receive it without stigma.
Competency as the gold standard
Ophthalmology demands precision, responsibility, and trust – and our training systems should evolve so that they continue to embody those principles.
Time-based training offers simplicity, but competence-based training offers safety. The growing evidence in ophthalmic surgical training has made one thing clear: excellence cannot be assumed – it must be demonstrated.
The tools for CBME exist. The only remaining question is whether we have the courage to let evidence – not habit – define how future generations of ophthalmologists are trained. Competence must be measured, achieved, and sustained. Anything less is a risk we should no longer be willing to take.
References
1. Thomsen et al., “Update on simulation-based surgical training and assessment in ophthalmology: a systematic review,” Ophthalmology, 122, 1111 (2015). PMID: 25864793.
2. Petersen et al., “Pretraining of basic skills on a virtual reality vitreoretinal simulator,” Acta Ophthalmol., [Online ahead of print] (2022). PMID: 34609052.
3. Thomsen et al., “Is there inter-procedural transfer of skills in intraocular surgery? A randomized controlled trial,” Acta Ophthalmol., [Online ahead of print] (2017). PMID: 28371367.
4. Thomsen et al., “Operating Room Performance Improves after Proficiency-Based Virtual Reality Cataract Surgery Training,” Ophthalmology, 124, 524 (2017). PMID: 28017423.
5. Borgersen et al., “Gathering Validity Evidence for Surgical Simulation: A Systematic Review,” Ann Surg., 267, 1063 (2018). PMID: 29303808.
6. Bjerrum et al., “Surgical simulation: Current practices and future perspectives for technical skills training,” Med Teach., 40, 668 (2018). PMID: 29911477.