Teaching New Dogs Old Tricks
Is mimicking human eye movements the key to improving machine vision?
Jed Boye | News
As human eyes, heads, and bodies move, the information received by our retinas constantly changes; yet we are still able to stabilize vision to capture and recognize our surroundings. To understand how this is possible, Andrea Benucci and colleagues at the RIKEN Center for Brain Science in Wako, Japan, used hierarchical convolutional neural networks inspired by the mammalian visual system to explore how the brain might optimize visual processing to achieve perceptual stability (1).
They first found that objects were better identified when the direction and magnitude of eye movements were included in the classification process, suggesting that human brains keep a record of (and account for) their own movements to maintain a stable percept. Simulating eye movements also helped the system cope with visual noise.
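The intuition behind that finding can be illustrated with a toy example. This is a minimal sketch, not the authors' actual model: if a vision system keeps a record of the movement vector that displaced its input, it can compensate for the shift before recognition. The image, shift, and function names below are all hypothetical.

```python
def shift_image(img, dx, dy):
    """Cyclically shift a 2D image (list of lists) by (dx, dy),
    simulating the retinal displacement caused by an eye movement."""
    h, w = len(img), len(img[0])
    return [[img[(r - dy) % h][(c - dx) % w] for c in range(w)]
            for r in range(h)]

def stabilize(img, dx, dy):
    """Undo a known eye movement by applying the inverse shift --
    a stand-in for the 'record of movements' the brain may keep."""
    return shift_image(img, -dx, -dy)

# A toy 3x3 'retina' image, displaced by a simulated eye movement.
original = [[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]]
displaced = shift_image(original, dx=1, dy=0)   # eye moves, input shifts
recovered = stabilize(displaced, dx=1, dy=0)    # compensate using the record
assert recovered == original
```

Without the recorded (dx, dy), the classifier sees only the displaced image; with it, the displacement becomes information rather than noise, which is the gist of including movement direction and magnitude in classification.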
Though these results provide more insight into the human visual system, the lessons may also benefit machine vision (in self-driving cars, for example) by enabling quicker and more accurate recognition of elements on the road.
What’s next for Benucci and his team? They plan to implement their artificial neural networks in silicon circuits and test them in real-world applications.
1. A Benucci, PLoS Comput Biol, 18, e1009928 (2022). PMID: 35286305.
2. RIKEN (2022). Available at: https://bit.ly/3NFwU4O.