Mechanisms in the brain help us distinguish speech in a crowd
Zuckerman Institute, Columbia University (2023)
We now have a good explanation for how our brain follows a conversation while we’re in a noisy, crowded room, a discovery that could improve hearing aids.
The general idea for speech perception is that only the voice of the person you’re paying attention to gets processed by the brain, says Vinay Raghavan at Columbia University in New York. “But my problem with that idea is that when someone yells in a crowded place, we don’t ignore it because we’re focused on the person we’re talking to, we pick it up anyway.”
To better understand how we process multiple voices, Raghavan and his colleagues implanted electrodes in the brains of seven people to monitor the organ’s activity while they underwent surgery for epilepsy. The participants, who were awake during the operation, listened to a 30-minute audio clip with two voices.
During the half-hour period, participants were repeatedly asked to switch their focus between the two voices, one of which was male and the other female. The voices spoke over each other and were mostly the same volume, but at various points in the clip one was louder than the other, mimicking the varying volumes of background conversation in a crowded room.
The team then used this brain activity data to produce a model that predicted how the brain processes the quieter and louder voices and how that may differ depending on which voice the participant was asked to focus on.
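The article doesn't specify the modelling method, but a common approach for relating a sound stimulus to neural recordings is a linear "temporal response function" encoding model fit with ridge regression. The sketch below is an illustration of that general technique on simulated data, not the authors' actual analysis; all variable names and parameters are assumptions.

```python
# Hypothetical sketch: a linear encoding model predicting one electrode's
# response from lagged values of an audio loudness envelope.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: an audio envelope and a neural response that follows it
# with a short delay (the brain responds some milliseconds after the sound).
n_samples, n_lags = 1000, 10
envelope = rng.standard_normal(n_samples)

# Design matrix of time-lagged copies of the envelope.
X = np.column_stack([np.roll(envelope, lag) for lag in range(n_lags)])
true_weights = rng.standard_normal(n_lags)
response = X @ true_weights + 0.1 * rng.standard_normal(n_samples)

# Ridge regression: w = (X'X + alpha*I)^-1 X'y
alpha = 1.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)

# How well the fitted model predicts the recorded response.
pred = X @ w
r = np.corrcoef(pred, response)[0, 1]
print(f"prediction correlation: {r:.2f}")
```

Comparing such prediction accuracy across conditions (attended versus ignored voice, louder versus quieter voice) is one way to test which signals the brain is encoding.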
The researchers found that the louder of the two voices was encoded by both the primary auditory cortex (thought to be responsible for the conscious perception of sound) and the secondary auditory cortex (responsible for more complex sound processing), even if the participant was told not to concentrate on the louder voice.
“This is the first study using neuroscience to show that your brain encodes speech that you aren’t paying attention to,” says Raghavan. “It opens the door to understanding how your brain processes things you don’t pay attention to.”
The researchers found that the softer voice was only encoded by the brain, including in the primary and secondary auditory cortices, when participants were asked to focus on that voice. Even then, the brain took about 95 milliseconds longer to process this voice as speech than when participants were asked to focus on the louder voice.
“The findings suggest that the brain likely uses different mechanisms for encoding and representing these two different volumes of voices when background conversation is going on,” says Raghavan.
Hearing aids that target the mechanism used to perceive softer voices could be made more effective, says Raghavan. “If we could make a hearing aid that could tell who you’re paying attention to, we could increase the volume of just that person’s voice.”
The team plans to repeat the experiment using less invasive methods to record audio processing in the brain. “Ideally, we don’t want to implant something in your brain to get enough brain recordings to decode your attention,” says Raghavan.