Neuroscientists discover a brain signal that indicates whether speech has been understood

February 22, 2018, Trinity College Dublin
When a listener understands speech, a strong response signal is seen over the mid back part of their scalp (top row; blue and green waveforms show response at two specific recording locations). When they can't understand (because, for example, the speech is played backwards), the signal completely disappears (bottom row; red and yellow waveforms show the lack of response at the same two specific recording locations). Credit: Professor Ed Lalor.

Neuroscientists from Trinity College Dublin and the University of Rochester have identified a specific brain signal associated with the conversion of speech into understanding. The signal is present when the listener has understood what they have heard, but it is absent when they either did not understand, or weren't paying attention.

The uniqueness of the signal means that it could have a number of potential applications, such as tracking language development in infants, assessing brain function in unresponsive patients, or determining the early onset of dementia in older persons.

During our everyday interactions, we routinely speak at rates of 120-200 words per minute. For listeners to understand at these rates - and not lose track of the conversation - their brains must comprehend the meaning of each of these words very rapidly. It is an amazing feat of the human brain that we do this so easily—especially given that the meaning of words can vary greatly depending on the context. For example, the word bat means very different things in the following two sentences: "I saw a bat flying overhead last night"; "The baseball player hit a home run with his favourite bat."

However, precisely how our brains compute the meaning of words in context has, until now, remained unclear. The new approach, published today in the international journal Current Biology, shows that our brains perform a rapid computation of the similarity in meaning that each word has to the words that have come immediately before it.
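As a rough illustration of that idea, the sketch below steps through the baseball sentence from the earlier example and scores each word by how far its meaning sits from the average meaning of the words that came before it. The word vectors here are hand-made placeholders purely for illustration; real analyses use vectors learned from large text corpora, as described next.

```python
import numpy as np

# Hypothetical, hand-made word vectors purely for illustration; a real analysis
# would use vectors learned from a large text corpus (see the next example).
vectors = {
    "player": np.array([0.9, 0.1, 0.2]),
    "hit":    np.array([0.8, 0.2, 0.1]),
    "a":      np.array([0.3, 0.3, 0.3]),
    "home":   np.array([0.7, 0.2, 0.3]),
    "run":    np.array([0.8, 0.1, 0.2]),
    "bat":    np.array([0.6, 0.3, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sentence = ["player", "hit", "a", "home", "run", "bat"]

# For each word, compare its vector with the average vector of the words before it.
for i in range(1, len(sentence)):
    context = np.mean([vectors[w] for w in sentence[:i]], axis=0)
    dissimilarity = 1.0 - cosine(vectors[sentence[i]], context)
    print(f"{sentence[i]:>6}: dissimilarity to preceding words = {dissimilarity:.3f}")
```

With these toy numbers, "run" scores as very close to the preceding baseball context, while "bat" stands out as more dissimilar - the kind of word-by-word contrast the new study tracks in the brain.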

To discover this, the researchers began by exploiting state-of-the-art techniques that allow modern computers and smartphones to "understand" speech. These techniques are quite different to how humans operate. Human evolution has been such that babies come more or less hardwired to learn how to speak based on a relatively small number of speech examples. Computers on the other hand need a tremendous amount of training, but because they are fast, they can accomplish this training very quickly. Thus, one can train a computer by giving it a lot of examples (e.g., all of Wikipedia) and by asking it to recognise which pairs of words appear together a lot and which don't. By doing this, the computer begins to "understand" that words that appear together regularly, like "cake" and "pie", must mean something similar. And, in fact, the computer ends up with a set of numerical measures capturing how similar any word is to any other.
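A miniature version of that co-occurrence idea can be written in a few lines. The sketch below counts which words appear together within the same sentence in a tiny made-up corpus and treats each word's row of counts as its "meaning" vector; cosine similarity between rows then behaves the way described above, with "cake" and "pie" scoring as similar because they share contexts. This is only a toy illustration of the principle, not the actual model used in the study.

```python
import numpy as np
from itertools import combinations

# Toy corpus; a real system would use something far larger (e.g. all of Wikipedia).
corpus = [
    "she baked a cake for dessert",
    "she baked a pie for dessert",
    "the bat flew out at night",
    "the player swung the bat hard",
]

# Count how often each pair of words appears together within a sentence.
sentences = [s.split() for s in corpus]
vocab = sorted({w for s in sentences for w in s})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for w1, w2 in combinations(s, 2):
        counts[index[w1], index[w2]] += 1
        counts[index[w2], index[w1]] += 1

# Each row of the co-occurrence matrix acts as a crude "meaning" vector.
def similarity(w1, w2):
    u, v = counts[index[w1]], counts[index[w2]]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

print(similarity("cake", "pie"))   # high: the two words share almost all their contexts
print(similarity("cake", "bat"))   # low: the two words share almost no contexts
```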

The newly discovered brain signal may be useful in determining whether a listener has truly understood spoken instructions. Credit: Ben White on Unsplash.

To test whether human brains actually compute the similarity between words as we listen to speech, the researchers recorded electrical brainwave signals from the human scalp - a technique known as electroencephalography, or EEG - as participants listened to a number of audiobooks. Then, by analysing their brain activity, they identified a specific response that reflected how similar or different a given word was from those that preceded it in the story.
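The article does not spell out the analysis pipeline, but one standard way to estimate such a word-level response from EEG is regularised (ridge) regression between a semantic-dissimilarity regressor and the recorded signal. The sketch below is an assumption-laden stand-in for that kind of analysis: it simulates word onsets, per-word dissimilarity values, and a synthetic EEG channel, then recovers the response waveform evoked per unit of dissimilarity. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated data (placeholders for real recordings and word annotations) ---
fs = 128                      # assumed EEG sampling rate in Hz
n_samples = fs * 60           # one minute of a single EEG channel
word_onsets = np.sort(rng.choice(n_samples - fs, size=150, replace=False))
dissimilarity = rng.uniform(0, 1, size=word_onsets.size)   # per-word semantic dissimilarity

# Impulse regressor: a spike at each word onset, scaled by that word's dissimilarity.
stimulus = np.zeros(n_samples)
stimulus[word_onsets] = dissimilarity

# Fake EEG: a delayed bump driven by dissimilarity, buried in noise.
true_kernel = np.exp(-0.5 * ((np.arange(fs) - 50) / 8.0) ** 2)   # peak ~400 ms after onset
eeg = np.convolve(stimulus, true_kernel)[:n_samples] + rng.normal(0, 0.5, n_samples)

# --- Ridge regression over time lags (a simplified stand-in for a full analysis) ---
lags = np.arange(0, fs)       # lags spanning 0-1000 ms after each word
X = np.column_stack([np.roll(stimulus, lag) for lag in lags])
X[:max(lags)] = 0             # discard samples that wrapped around
lam = 1.0                     # ridge regularisation strength
w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

# w approximates the response waveform evoked per unit of semantic dissimilarity;
# in real data, its presence or absence is what indexes whether speech was understood.
print("estimated peak lag (ms):", 1000 * np.argmax(w) / fs)
```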

Crucially, this signal disappeared completely when the subjects either could not understand the speech (because it was too noisy), or when they were just not paying attention to it. Thus, this signal represents an extremely sensitive measure of whether or not a person is truly understanding the speech they are hearing, and, as such, it has a number of potential important applications.

Ed Lalor, Ussher Assistant Professor in Trinity College Dublin's School of Engineering, Trinity College Institute of Neuroscience, and Trinity Centre for Bioengineering, led the research.

Professor Lalor said: "Potential applications include testing language development in infants, or determining the level of brain function in patients in a reduced state of consciousness. The presence or absence of the signal may also confirm whether a person in a job that demands precision and speedy reactions - such as an air traffic controller or soldier - has understood the instructions they have received, and it may perhaps even be useful for testing for the onset of dementia in older people based on their ability to follow a conversation."

"There is more work to be done before we fully understand the full range of computations that our brains perform when we understand speech. However, we have already begun searching for other ways that our brains might compute meaning, and how those computations differ from those performed by computers. We hope the new approach will make a real difference when applied in some of the ways we envision."


More information: Current Biology (2018). DOI: 10.1016/j.cub.2018.01.080
