Whether our speech is fast or slow, we say about the same

January 17, 2017 by David Orenstein  
A new study finds that whether we talk fast or slow we all communicate about the same amount of information in a given time. Credit: Brown University

The purpose of speech is communication, not speed—so perhaps some new research findings, while counterintuitive, should come as no surprise. Whether we speak quickly or slowly, the new study suggests, we end up conveying information at about the same rate, because faster speech packs less information in each utterance.

The study suggests we tend to converse within a narrow channel of communication data so that we provide neither too much nor too little information at a given time, said Uriel Cohen Priva, author of the study in the March issue of Cognition and assistant professor in the Department of Cognitive, Linguistic and Psychological Sciences at Brown University.

"It seems the constraints on how much information per second we should transmit are fairly strict, or stricter than we thought they were," Cohen Priva said.

In information theory, rarer word choices convey greater "lexical information," while more complicated syntax, such as the passive voice, conveys greater "structural information." To stay within the channel, those who talk quickly speak with more common words and simpler syntax, while those with a slower pace tend to use rarer, more unexpected words and more complicated wordings, Cohen Priva found.
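In information-theoretic terms, a word's lexical information can be measured as its surprisal: the negative log probability of the word being chosen. The short Python sketch below illustrates the idea with invented probabilities (the study itself estimated probabilities from its conversational corpora, taking surrounding words into account), so the specific words and numbers here are purely hypothetical.

```python
import math

# Illustrative unigram probabilities (invented; not estimated from the study's corpora).
word_prob = {
    "thing": 0.002,       # common word, low lexical information
    "artifact": 0.00004,  # rare word, high lexical information
}

def surprisal_bits(p):
    """Lexical information of a word choice, in bits: -log2 of its probability."""
    return -math.log2(p)

for word, p in word_prob.items():
    print(f"{word!r}: {surprisal_bits(p):.1f} bits")
# 'thing' carries about 9 bits, 'artifact' about 14.6: rarer choices pack
# more information into each word.
```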

The study provides only hints about why a constrained information rate might govern conversation, Cohen Priva said. It could derive either from a speaker's difficulty in formulating and uttering too much information too quickly or from a listener's difficulty in processing and comprehending information delivered at too fast a pace.

Analyzing speech

To conduct the study, Cohen Priva analyzed two independent troves of conversational data: the Switchboard Corpus, which contains 2,400 annotated telephone conversations, and the Buckeye Corpus, which consists of 40 lengthy interviews. In total, the data included the speech of 398 people.

Cohen Priva made several measurements on all that speech to determine each speaker's information rate—how much lexical and structural information they conveyed in a given amount of time—and speech rate—how much they said in that time.

Deriving meaningful statistics required making complex calculations to determine the relative frequency of words both on their own and given the words that preceded and followed them. Cohen Priva compared how long people take to say each word on average vs. how long a particular speaker required. He also measured how often each speaker used the passive voice, compared to the active voice, and in all the calculations accounted for each person's age, gender, the speech rate of the other member of the conversation, and other possible confounds.
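As a rough illustration of how per-speaker speech and information rates could be derived, here is a minimal sketch using invented word probabilities and durations. The helper names and numbers are hypothetical; the study's actual calculations estimated probabilities from Switchboard and Buckeye, conditioned on neighboring words, and controlled for confounds such as age, gender, and the interlocutor's speech rate.

```python
import math

# Invented word probabilities and a hypothetical speaker layout: each speaker
# is a list of (word, duration_in_seconds) pairs.
word_prob = {"the": 0.05, "dog": 0.003, "ran": 0.002,
             "canine": 0.0001, "sprinted": 0.00008}

def surprisal_bits(word):
    return -math.log2(word_prob[word])

def speaker_rates(utterance):
    """Return (speech rate in words/s, information rate in bits/s) for one speaker."""
    total_time = sum(duration for _, duration in utterance)
    words_per_sec = len(utterance) / total_time
    bits_per_sec = sum(surprisal_bits(w) for w, _ in utterance) / total_time
    return words_per_sec, bits_per_sec

# A fast talker using common words vs. a slow talker using rarer ones.
fast = [("the", 0.15), ("dog", 0.20), ("ran", 0.20)] * 10
slow = [("the", 0.25), ("canine", 0.45), ("sprinted", 0.50)] * 10

for label, utterance in [("fast", fast), ("slow", slow)]:
    words_per_sec, bits_per_sec = speaker_rates(utterance)
    print(f"{label}: {words_per_sec:.1f} words/s, {bits_per_sec:.1f} bits/s")
# The slow speaker says fewer words per second but packs more bits into each
# one, so the two information rates end up closer than the raw speech rates.
```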

Ultimately he found across the two independent dimensions—lexical and structural—and the two independent data sources—Switchboard and Buckeye—that the same statistically significant correlation held true: as speech sped up, the information rate declined.
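To make the direction of that relationship concrete, the small check below computes a negative correlation on hypothetical per-speaker summaries; the numbers are invented to mirror the reported direction of the effect, not drawn from either corpus.

```python
import numpy as np

# Hypothetical per-speaker summaries (invented numbers, not the study's data):
# speech rate in words per second and information rate in bits per second.
speech_rate = np.array([2.1, 2.8, 3.4, 4.0, 4.6, 5.3])
info_rate = np.array([41.0, 39.5, 38.0, 36.8, 35.1, 33.9])

r = np.corrcoef(speech_rate, info_rate)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly negative: faster speech, lower information rate
```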

"We could assume that there are widely different capacities of information per second that people use in speech and that each of them is possible and you can observe each and every one," Cohen Priva said. "But had that been the case, then finding these effects would have been very difficult to do. Instead, it's reliably found in two corpora in two different domains."

Do gender differences offer a clue?

Cohen Priva found a key difference involving gender that might offer a clue about why conversation has an apparently constrained information rate. It may be a socially imposed constraint for the listener's benefit.

On average, while both men and women exhibited the main trend, men conveyed more information than women at the same speech rate. There is no reason to believe that the ability to convey information at a given rate differs by gender, Cohen Priva said. Instead, he hypothesizes, women may tend to be more concerned with making sure their listeners understand what they are saying. Other studies, for example, have shown that in conversation women are more likely than men to "backchannel," or provide verbal cues like "uh huh" to confirm understanding as the dialogue proceeds.

Cohen Priva said the study has the potential to shed some light on the way people craft their utterances. One hypothesis in the field is that people choose what they intend to say and then slow their speech as they utter more rare or difficult words (e.g. if harder, then slower). But he said his data is consistent with a hypothesis that the overall speech rate dictates word choice and syntax (e.g. if faster, then simpler).

"We need to consider a model in which fast speakers consistently choose different types of words or have a preference for different types of words or structures," he said.

In other words, how one speaks appears related to how quickly one speaks.


More information: Uriel Cohen Priva, Not so fast: Fast speech correlates with lower lexical and structural information, Cognition (2017). DOI: 10.1016/j.cognition.2016.12.002
