Sign language recruits the same neural systems as spoken language


(Medical Xpress)—Sign languages such as American Sign Language (ASL) share the same structural characteristics as spoken languages, including tight grammatical constraints and rich expressiveness. Nonetheless, because their grammatical features are conveyed via gestures, some researchers have compared elements of sign language to the non-lexical, symbolic gestures that often accompany spoken language. Specifically, they have focused on so-called "classifier constructions," also known as "verbs of motion," which, in sign languages, include a root expressing the motion event, morphemes marking the manner and direction of motion, and a classifier that specifies the semantic category of the moving object, such as "vehicle."

Despite the obvious grammatical constraints on these classifier constructions, it has been suggested that native signers may process them by recruiting the same networks used to analyze symbolic gestures. In fact, some researchers have even suggested that verbs of motion are not composed of linguistic morphemes at all, but are strictly gestural in character. This would seem to fly in the face of the segmental structure of these verbs, as well as their morphological and syntactic regularities.

To test this hypothesis, an international group of researchers recruited 19 deaf native signers of ASL and 19 hearing, native English speakers for an fMRI study. The results, published in the Proceedings of the National Academy of Sciences, strongly indicate that verbs of motion in ASL are not processed like spatial imagery or other nonlinguistic stimuli, but are, as expected, processed like grammatically structured language in the inferior frontal gyrus and superior temporal sulcus of the left hemisphere.

To create the stimuli, some participants were first given the task of conveying the meaning of a video. The videos showed toys moving along paths, or line drawings of people or animals engaged in various activities. After viewing each video, these signers or gesturers were filmed describing it, producing either sign language descriptions or gestural representations of what they had seen.

The fMRI participants were shown the original video followed immediately by both the signed and gestural descriptive clips, and were asked to determine via button press which one more accurately conveyed the meaning of the video.

The researchers determined that ASL verbs of motion elicit a set of neural activations in native signers that is distinct from the activations elicited when the same signers watch gestural rather than linguistic stimuli. Thus, native signers do not process ASL verbs of motion as nonlinguistic, gestural imagery, but rather in terms of their linguistic structure.

However, the results also indicate that lifelong users of sign languages show altered neural responses to non-linguistic manual gestures compared with hearing non-signers. Left frontal and temporal language-processing regions were activated by gesture only in signers. The authors note, "Although these same left hemisphere regions were more strongly activated by ASL, their activation by gesture exclusively in signers suggests that experience drives these areas to attempt to analyze visual-manual symbolic communication even when it lacks linguistic structure."

More information: "Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture," PNAS 2015, published ahead of print August 17, 2015, DOI: 10.1073/pnas.1510527112

Abstract
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual–manual modality with a nonlinguistic symbolic communicative system—gesture—further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages—supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network—demonstrating an influence of experience on the perception of nonlinguistic stimuli.

© 2015 Medical Xpress

