Researchers move one step closer to understanding how the brain processes multiple conversations at once


Conducting a discussion in a noisy place can be challenging when other conversations and background noises interfere with our ability to focus attention on our conversation partner. How the brain deals with the abundance of sounds in our environments, and prioritizes among them, has been a topic of debate among cognitive neuroscientists for many decades.

Often referred to as the "Cocktail Party Problem," this debate centers on whether we can absorb information from a few speakers in parallel, or whether we are limited to understanding speech from only one speaker at a time.

One reason this question is difficult to answer is that attention is an internal state not directly accessible to researchers. By measuring the brain activity of listeners as they attempt to focus attention on a single speaker and ignore a task-irrelevant one, researchers can gain insight into the internal operations of attention and into how competing speech stimuli are represented and processed by the brain.

In a study recently published in the journal eLife, researchers from Israel's Bar-Ilan University set out to explore whether words and phrases in task-irrelevant speech are identified linguistically, or are merely represented in the brain as "acoustic noise," with no further linguistic processing applied.

"Answering this question helps us better understand the capacity and limitations of the human speech-processing system. It also gives insight into how attention helps us deal with the multitude of stimuli in our environments—helping to focus primarily on the task-at-hand, while also monitoring what is happening around us," says Dr. Elana Zion Golumbic, of Bar-Ilan University's Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the study.

Zion Golumbic and her team measured the brain activity of human listeners as they heard two speech stimuli, one presented to each ear. Participants were instructed to focus their attention on the content of one speaker and to ignore the other.
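For readers curious what this kind of measurement involves, the sketch below illustrates a generic "speech tracking" analysis of the sort commonly used in this literature: lagged ridge regression relating each talker's amplitude envelope to a neural recording. Everything in it is illustrative; the simulated signals, the sampling rate, and the tracking_score helper are assumptions made for the sketch, not the study's actual pipeline.

```python
# Illustrative sketch only (not the study's pipeline): quantify how strongly
# a neural signal "tracks" an attended vs. an ignored speech envelope using
# lagged ridge regression, a common approach in the speech-tracking literature.
import numpy as np

rng = np.random.default_rng(0)
fs = 100                 # assumed sampling rate in Hz
n = fs * 60              # one minute of simulated data

def smooth_envelope(noise):
    """Make a nonnegative, slowly varying stand-in for a speech envelope."""
    return np.abs(np.convolve(noise, np.hanning(25), mode="same"))

env_attended = smooth_envelope(rng.standard_normal(n))
env_ignored = smooth_envelope(rng.standard_normal(n))

# Simulated neural signal: strong tracking of the attended talker,
# weak tracking of the ignored talker, plus noise.
neural = 1.0 * env_attended + 0.3 * env_ignored + rng.standard_normal(n)

def tracking_score(envelope, signal, max_lag=30, ridge=100.0):
    """Regress the signal on lagged copies of the envelope (ridge regression)
    and return the in-sample correlation between prediction and signal."""
    X = np.column_stack([np.roll(envelope, lag) for lag in range(max_lag)])
    w = np.linalg.solve(X.T @ X + ridge * np.eye(max_lag), X.T @ signal)
    return np.corrcoef(X @ w, signal)[0, 1]

print("attended tracking:", round(tracking_score(env_attended, neural), 3))
print("ignored  tracking:", round(tracking_score(env_ignored, neural), 3))
```

A larger score for the attended stream is the expected signature of selective attention; the question the study probes is how much residual tracking the ignored stream shows, and at what level of processing.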

The researchers found evidence that so-called unattended speech, the kind generated by background conversations and noise, is processed at both acoustic and linguistic levels, with responses observed in auditory and language-related areas of the brain.

Additionally, they found that the brain response to the attended speaker in language-related brain regions was stronger when it "competed" with other speech rather than with non-speech sounds. This suggests that the two speech inputs compete for the same processing resources, which may underlie the increased listening effort required to stay focused when many people talk at once.

The study contributes to efforts to understand how the brain deals with the abundance of auditory inputs in our everyday environments. It has theoretical implications for how we understand the nature of attentional selection in the brain. It also carries substantial practical potential for guiding the design of smart assistive devices that help individuals focus their attention or navigate noisy environments. The methods developed here also provide a useful new approach for testing the basis for individual differences in the ability to focus attention in noisy environments.

More information: Paz Har-shai Yahav et al, Linguistic processing of task-irrelevant speech at a Cocktail Party, eLife (2021). DOI: 10.7554/eLife.65096

