People track when talkers say 'uh' to predict what comes next

Speakers tend to say 'uh' before uncommon words. Credit: Tumisu on Pixabay; edited by H.R. Bosker

Spontaneous conversation is riddled with disfluencies such as pauses and 'uhm's: On average, people produce 6 disfluencies every 100 words. But disfluencies do not occur randomly. Instead, 'uh' typically occurs before 'hard-to-name' low-frequency words ('uh... automobile'). Previous experiments led by Hans Rutger Bosker from the Max Planck Institute for Psycholinguistics have shown that people can use disfluencies to predict upcoming low-frequency words. But Bosker and his colleagues went one step further. They tested whether listeners would actively track the occurrence of 'uh', even when it appeared in unexpected places.

Click on uh... the igloo

The researchers used eye-tracking, which measures people's glances toward objects on a screen. Two groups of Dutch participants saw two images on a screen (for instance, a hand and an igloo) and heard both fluent and disfluent instructions. However, one group heard a 'typical' talker say 'uh' before 'hard-to-name' low-frequency words ("Click on uh... the igloo"), while the other group heard an 'atypical' talker saying 'uh' before 'easy-to-name' high-frequency words ("Click on uh... the hand"). Would people in this second group track the unexpected occurrences of 'uh' and learn to look at the 'easy-to-name' object?

As expected, participants listening to the 'typical' talker already looked at the igloo upon hearing the disfluency ('uh...'), that is, well before hearing the word 'igloo'. Interestingly, people listening to the 'atypical' talker learned to adjust this 'natural' prediction. Upon hearing a disfluency ('uh...'), they learned to look at the common object, even before hearing the word itself ('hand'). "We take this as evidence that listeners actively keep track of when and where talkers say 'uh' in spoken communication, adjusting what they predict will come next for different talkers," concludes Bosker.

Speakers with a foreign accent

Would listeners also adjust their expectations when listening to a non-native speaker? In a follow-up experiment, the same sentences were spoken by someone with a heavy Romanian accent. In this experiment, participants did learn to predict uncommon objects from a 'typical' non-native talker (saying 'uh' before low-frequency words). However, they did not learn to predict high-frequency referents from an 'atypical' non-native talker (saying 'uh' before high-frequency words)—even though the sentence materials were identical in the native and non-native experiments.

Geertje van Bergen, co-author on the paper, explains: "This probably indicates that hearing a few atypical disfluent instructions (e.g., the non-native talker saying 'uh' before common words like "hand" and "car") led listeners to infer that the non-native speaker had difficulty naming even simple words in Dutch. As such, they presumably took the non-native disfluencies as not predictive of the word to follow—in spite of the clear distributional cues indicating otherwise." This finding is interesting, as it reveals an interplay between 'disfluency tracking' and 'pragmatic inferencing': We only track disfluencies if we infer from the talker's voice that the talker is a 'reliable' uhm'er.

A hot topic in psycholinguistics

According to the authors, this is the first evidence of distributional learning in disfluency processing. "We've known about disfluencies triggering prediction for more than 10 years now, but we demonstrate that these predictive strategies are malleable. People actively track when particular talkers say 'uh' on a moment-by-moment basis, adjusting their predictions about what will come next," explains Bosker. Distributional learning has been a hot topic in psycholinguistics the past few years. "We extend this field with evidence for distributional learning of metalinguistic performance cues, namely disfluencies—highlighting the wide scope of distributional learning in language processing."


More information: Hans Rutger Bosker et al, Counting 'uhm's: How tracking the distribution of native and non-native disfluencies influences online language comprehension, Journal of Memory and Language (2019). DOI: 10.1016/j.jml.2019.02.006
Provided by Max Planck Society
Citation: People track when talkers say 'uh' to predict what comes next (2019, March 6) retrieved 26 May 2019 from https://medicalxpress.com/news/2019-03-people-track-talkers-uh.html