A team of researchers from the University of Arizona, Northwestern University, and ETH Zürich has developed a groundbreaking eye-tracking technique that uses single-shot deflectometry to reconstruct dense 3D surface data of the eye. Published in Nature Communications, the study demonstrates a major leap forward in gaze estimation accuracy, with potential applications in virtual reality (VR), clinical diagnostics, psychology research, and more.
Current eye-tracking technologies rely largely on image-based or sparse reflection-based methods that can be limited in both accuracy and robustness. These techniques typically extract features such as the pupil or iris from 2D images, or use a small number of infrared light sources to estimate gaze. The result is gaze estimation errors typically in the range of 0.5° to 1.5°, and these errors are generally hard to properly assess and quantify.
In contrast, the new method is said to achieve unprecedented precision by capturing over 40,000 reflection points from the eye surface in a single frame using deflectometry – a technique originally developed to measure the shape of reflective surfaces like lenses and mirrors. This dense 3D data enables accurate mapping of the eye’s surface shape and normals, providing the foundation for much more accurate gaze direction estimation.
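To illustrate how a dense cloud of reconstructed surface points can be turned into a gaze direction, the sketch below fits a sphere to a synthetic corneal patch and takes the optical axis as the line from an eyeball rotation center through the fitted corneal center. This is a minimal illustration, not the authors' algorithm: the least-squares sphere fit, the 7.8 mm corneal radius, and the eyeball-center position are all stand-in assumptions.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: |p - c|^2 = r^2 is rewritten as the
    linear system 2 p.c + (r^2 - |c|^2) = |p|^2 in (c, k)."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius

# Synthetic "corneal" patch: points on a sphere of radius 7.8 mm
# (a typical corneal curvature), sampled over a small cap.
rng = np.random.default_rng(0)
true_center = np.array([0.0, 0.0, -7.8])
theta = rng.uniform(0, 0.4, 2000)          # polar angle (small cap)
phi = rng.uniform(0, 2 * np.pi, 2000)
pts = true_center + 7.8 * np.stack([
    np.sin(theta) * np.cos(phi),
    np.sin(theta) * np.sin(phi),
    np.cos(theta)], axis=1)

center, radius = fit_sphere(pts)

# The optical axis can then be taken as the direction from an assumed
# eyeball rotation center toward the fitted corneal center.
eyeball_center = np.array([0.0, 0.0, -13.0])   # assumed, ~5 mm behind corneal center
gaze = center - eyeball_center
gaze /= np.linalg.norm(gaze)
```

With tens of thousands of surface points per frame, such a fit is heavily overdetermined, which is one intuition for why dense deflectometry data can support far tighter gaze estimates than a handful of glint reflections.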
In lab-based tests using a realistic model eye, the system achieved relative gaze errors as low as 0.08° with precision down to 0.02°, outperforming all existing techniques, the authors report. When tested on live human subjects, the system still delivered impressive results, with accuracy ranging from 0.46° to 0.97°, despite the added complexities of eye movement and head positioning.
Key to the approach is a novel single-shot phase-measuring deflectometry setup and a robust calibration process that eliminates the need for traditional markers. The researchers also developed a customized algorithm that refines the gaze direction based on geometric assumptions about the eye.
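Phase-measuring deflectometry works by displaying a sinusoidal fringe pattern and reading the surface shape out of the distortions in its reflected phase; in a single-shot variant, that phase must be recovered from one camera frame, for which Fourier-transform (carrier-fringe) demodulation is a standard approach. The sketch below demonstrates that idea on synthetic data; the fringe frequency, the Gaussian "surface" distortion, and the filter width are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Simulate a single camera frame of a sinusoidal fringe pattern whose
# phase is perturbed by a (hypothetical) smooth surface term.
N = 256
f0 = 12 / N                                  # carrier frequency, cycles/pixel
xx, yy = np.meshgrid(np.arange(N), np.arange(N))
distortion = 0.8 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 40 ** 2))
frame = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * xx + distortion)

# Fourier-transform demodulation: isolate the +1st-order carrier lobe
# in the spectrum, invert, and read the phase off the analytic signal.
F = np.fft.fftshift(np.fft.fft2(frame))
mask = np.zeros_like(F)
cx = N // 2 + 12                             # carrier lobe column (f0 * N = 12)
mask[:, cx - 6:cx + 7] = 1.0
analytic = np.fft.ifft2(np.fft.ifftshift(F * mask))
wrapped = np.angle(analytic)

# Subtract the known linear carrier and unwrap, leaving only the
# surface-induced phase (up to a global constant).
phase = np.unwrap(wrapped - 2 * np.pi * f0 * xx, axis=1)
```

Because the entire phase map comes from one exposure, the measurement is insensitive to eye motion between frames, which is one reason a single-shot formulation matters for a moving target like the eye.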
The technology not only offers enhanced precision but also provides a detailed 3D map of the eye surface, which could be used for real-time correction of vision impairments in VR/AR devices.