Neuroscientists discover how brain recognizes speech in a noisy room
The "cocktail party effect", a familiar festive phenomenon, has long interested researchers seeking to improve speech-recognition technology. It is the ability to focus one's attention on a single speaker or conversation while filtering out other conversations and background noise.
Lead researcher Christopher Holdgraf from the University of California, Berkeley, and his colleagues recorded and measured people's brain activity at the moment a previously unintelligible sentence suddenly became clear, after each subject had learned what the "garbled speech" meant.
They worked with epilepsy patients, who had had a portion of their skull removed and electrodes placed on the brain surface to track their seizures. Seven of these subjects took part in the scientists’ auditory test.
The researchers first played each subject a very distorted, garbled sentence, which almost no one was able to understand. They then played a normal, easy-to-understand version of the same sentence and immediately repeated the garbled version.
The researchers explained that after hearing the clear version of the sentence, all the subjects understood the subsequent noisy version.
The brain recordings captured this moment of recognition as distinct patterns of activity in areas of the brain known to be associated with processing sound and understanding speech.
When the subjects heard the very garbled sentence, the scientists reported that they saw little activity in those parts of the brain. Hearing the clearly understandable sentence then triggered patterns of activity in those same areas.
The scientific revelation was seeing how this altered the brain's response when the subject heard the distorted, garbled phrase again. Auditory and speech-processing areas then "lit up" and changed their pattern of activity over time, apparently tuning in to the words among the distortion.
They found that the brain actually changes the way it focuses on different parts of the sound. “When patients heard the clear sentences first, the auditory cortex (the part of the brain associated with processing sound) enhanced the speech signal,” they explained.
Holdgraf said they are starting to look for more subtle or complex relationships between the brain activity and the sound. “Rather than just looking at ‘up or down’, it’s looking at the details of how the brain activity changes across time, and how that activity relates to features in the sound,” he added.
This, he said, gets closer to the mechanisms behind perception.
By understanding how the brain filters out noise in the world, the researchers hope to create devices that help people with speech and hearing impairments do the same thing.
“It is unbelievable how fast and plastic the brain is,” added co-author Prof Robert Knight. “[And] this is the first time we have any evidence on how it actually works in humans.”
Knight and his colleagues are aiming to use the findings to develop a speech decoder, a brain implant to interpret people’s imagined speech, which could help those with certain neurodegenerative diseases that affect their ability to speak.