Teaching New Dogs Old Tricks
Is mimicking human eye movements the key to improving machine vision?
Oscelle Boye | News
As human eyes, heads, and bodies move, the information reaching our retinas constantly changes; yet we still perceive a stable visual world and recognize our surroundings. To understand how this is possible, Andrea Benucci and colleagues at the RIKEN Center for Brain Science in Wako, Japan, used hierarchical convolutional neural networks inspired by the mammalian visual system to explore how the brain might compensate for self-generated movement to achieve perceptual stability (1).
They first found that the networks identified objects more accurately when the direction and magnitude of eye movements were included in the classification process, suggesting that the brain keeps a record of its own movements and accounts for them to maintain a stable percept. Training with simulated eye movements also made the system more robust to visual noise.
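To make the idea concrete, here is a minimal PyTorch sketch of that kind of setup: a small convolutional classifier that receives, alongside the image, the eye-movement vector (dx, dy) that produced it. This is an illustration of the general technique only, not the authors' actual architecture; the class name, layer sizes, and the `simulate_eye_shift` helper are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's model): a CNN classifier that also
# receives the eye movement (dx, dy) behind the current retinal image,
# mirroring the finding that classification improves when movement
# information is included.
import torch
import torch.nn as nn

def simulate_eye_shift(image: torch.Tensor, dx: int, dy: int) -> torch.Tensor:
    """Crude stand-in for an eye movement: shift the image, wrapping at edges."""
    return torch.roll(image, shifts=(dy, dx), dims=(-2, -1))

class EyeMovementAwareClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Small convolutional feature extractor for the "retinal" image.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 32, 1, 1)
        )
        # The head sees image features plus the 2-D movement vector.
        self.head = nn.Linear(32 + 2, num_classes)

    def forward(self, image: torch.Tensor, eye_shift: torch.Tensor) -> torch.Tensor:
        feats = self.features(image).flatten(1)          # (batch, 32)
        combined = torch.cat([feats, eye_shift], dim=1)  # append (dx, dy)
        return self.head(combined)

# Usage: a batch of shifted images plus the shifts that produced them.
model = EyeMovementAwareClassifier()
images = torch.randn(8, 1, 28, 28)
shifted = simulate_eye_shift(images, dx=3, dy=-2)
shifts = torch.tensor([[3.0, -2.0]] * 8)  # (dx, dy) per image, in pixels
logits = model(shifted, shifts)
print(logits.shape)  # torch.Size([8, 10])
```

Training on many such shifted copies, each paired with its movement vector, is one plausible way to emulate both findings: the movement input aids classification, and the simulated movements act as augmentation against visual noise.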
Though these results offer new insight into the human visual system, the lessons may also benefit machine vision – for example, in self-driving cars – by enabling quicker and more accurate recognition of objects on the road.
What’s next for Benucci and his team? They plan to implement their artificial neural networks in silicon circuits and test them in real-world applications.
1. A Benucci, PLoS Comput Biol, 18, e1009928 (2022). PMID: 35286305.
2. RIKEN (2022). Available at: https://bit.ly/3NFwU4O.