Sonic Translation
New study reveals that facial recognition may not be based solely on visual experience
Alun Evans | 3 min read | Learning
The belief that blind people have enhanced senses as a direct result of their sight loss has long existed. The concept appears frequently in fiction featuring blind characters. In the Marvel Universe, for instance, there’s a comic book – Daredevil – devoted to the idea. Daredevil’s titular protagonist – a lawyer blinded in a childhood radioactive accident (this is Marvel, after all) – discovers that he has developed both “radar sense” and “echolocation” to compensate for his lost vision.
In the real world, a number of scientific studies have supported the claim of heightened senses following the loss – or absence – of another sensory faculty. For example, a Journal of Neuroscience study from 2012 indicated that people born deaf typically experience a neural remapping, with their brains devoting processing power to touch or vision instead of sound. A PLOS ONE study in 2017 went so far as to use MRI scans of “early blind” patients to highlight the structural and functional changes – or “rewiring” – of the brain in profound early blindness.
This surprising ability of our brains to effectively reorganize their neural networks in response to external environmental changes is known as neuroplasticity. And now this concept has been applied in another PLOS ONE study: “Sound-encoded faces activate the left fusiform face area in the early blind.”
The November 2023 study – conducted by a team at the Department of Neuroscience at Georgetown University Medical Center – examined how auditory patterns are processed by the fusiform face area (FFA), the region of the inferior temporal cortex dedicated to facial recognition. Notably, the authors stated in their abstract that they set out to demonstrate that facial recognition isn’t based solely on a person’s visual experience.
To test the hypothesis, the team employed a sensory substitution device (SSD) that converted basic two-dimensional images into sound. During the study, participants – both blind and sighted – wore the SSD, which consists of a head-mounted video camera (acting as an artificial retina) and headphones that translate visual information into audio in real time. In each session, participants were asked to recognize simple patterns and geometric shapes, with the stimuli becoming gradually more complex; simple lines eventually gave way to shapes resembling houses and faces. Using functional MRI scans, the team observed which areas of the brain were activated as participants listened to the image-translated sounds. For blind participants, the sound-evoked activity occurred in the left fusiform face area, while in sighted participants it occurred in the right fusiform face area.
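How might a camera image become a sound? The paper summarized here doesn’t spell out its exact encoding, but SSDs of this kind (the best known being the vOICe) typically sweep across the image from left to right, mapping vertical position to pitch and pixel brightness to loudness. The short Python sketch below illustrates that general idea only; the frequency range, sweep duration, and the toy “smiley” image are illustrative assumptions, not the study’s actual parameters.

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        f_min=200.0, f_max=8000.0):
    """Turn a 2-D grayscale image (values in [0, 1]) into a mono waveform.

    Assumed mapping (one common SSD convention, not the study's own spec):
      - horizontal position -> time (left edge of the image plays first)
      - vertical position   -> pitch (top rows map to higher frequencies)
      - pixel brightness    -> loudness of that row's sine tone
    """
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)

    # Logarithmically spaced tone frequencies; top row gets the highest pitch.
    freqs = np.geomspace(f_max, f_min, n_rows)

    t = np.arange(samples_per_col) / sample_rate
    waveform = []
    for col in range(n_cols):
        brightness = image[:, col]                       # one vertical slice
        # One sine tone per row, weighted by that pixel's brightness.
        tones = brightness[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        waveform.append(tones.sum(axis=0))

    audio = np.concatenate(waveform)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio           # normalize to [-1, 1]

# Example: a crude 8x8 "smiley" pattern becomes a one-second soundscape.
face = np.zeros((8, 8))
face[2, 2] = face[2, 5] = 1.0        # eyes
face[5, 1:7] = 1.0                   # mouth
soundscape = image_to_soundscape(face)
```

Even in this stripped-down form, two different face-like patterns produce audibly different sweeps, which is essentially the kind of discrimination the participants were asked to perform.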
Though the PLOS ONE paper was a small-scale study of only 16 participants (6 blind and 10 sighted), the results are somewhat surprising: they suggest that blind people can indeed recognize and differentiate basic facial shapes (for example, a happy face emoji) when these are translated into distinct sound patterns. The researchers’ eventual goal is to use real-life pictures rather than emojis and other heavily simplified symbols; to take that next step, however, they will first need to greatly increase the resolution of the SSD equipment.