Adults, even the brightest ones, often struggle with learning new languages. Dr Nina Kazanina, of the Department of Psychology at the University of Bristol, explains why.

People comprehend their native language with great speed and accuracy, and without visible effort. Indeed, our ability to perform linguistic computations is remarkable, especially when compared with other cognitive domains in which our computational abilities may be rather modest.

For example, an average person is vastly slower than a computer at adding up numbers or remembering facts. On the other hand, most humans surpass computers at language-related tasks such as recognising sounds and words, and comprehending sentences.

My work deals with one aspect of language processing: the identification of sounds, which is needed for subsequent word recognition. Sound recognition is a complex task because the same sound may be spoken differently depending on the speaker’s sex, age, mood or the pitch of their voice. In addition, people may whisper or shout, and speak in a quiet room or on a noisy street. These, and many other factors, lead to huge variation in individual acoustic instances of the same sound. It is precisely this acoustic variation that has for decades caused problems for computational linguists and speech engineers building automatic speech recognition systems. Humans, however, even five-year-olds, can successfully recognise sounds and words and understand what other people say almost instantly.
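
To make the problem concrete, here is a toy sketch in Python of why matching raw acoustics fails across speakers. It is not taken from the research, and the formant values are rough textbook approximations: a recogniser that stores one speaker’s vowel templates starts confusing vowels as soon as the voice changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rough textbook formant frequencies (Hz) for two vowels in an adult
# male voice; the exact numbers are illustrative only.
VOWEL_MEANS = {"i": np.array([270.0, 2290.0]),   # (F1, F2), vowel in "heed"
               "e": np.array([530.0, 1840.0])}   # (F1, F2), vowel in "head"

def utterance(vowel, speaker_scale, noise=50.0):
    """One spoken token: a shorter vocal tract scales every formant up,
    and random noise stands in for mood, loudness and room acoustics."""
    return VOWEL_MEANS[vowel] * speaker_scale + rng.normal(0.0, noise, 2)

def classify(token):
    """Naive recogniser: nearest stored template in raw acoustic space,
    where the templates come from a single adult male speaker."""
    return min(VOWEL_MEANS, key=lambda v: np.linalg.norm(token - VOWEL_MEANS[v]))

# The same recogniser applied to a child's voice (formants ~40% higher)
# starts to confuse the two vowels, although a human listener would not.
for true_vowel in ("i", "e"):
    token = utterance(true_vowel, speaker_scale=1.4)
    print(f"spoken '{true_vowel}' -> heard '{classify(token)}', (F1, F2) = {token.round()}")
```

Human listeners, by contrast, effortlessly adjust for the speaker before identifying the sound, which is exactly the efficiency the research seeks to explain.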

So what allows humans to be so efficient at sound recognition, and how does that affect our ability to learn a new language? To answer this question, we used non-invasive techniques called electroencephalography and magnetoencephalography, which record electromagnetic signals from the brain while people listen to different speech sounds. We focused on activity in the auditory cortex, a region in the temporal lobe of the brain that is responsible for processing sound information. The results show that the auditory cortex of an adult speaker selectively preserves variation in speech that is meaningful in the listener’s language and disregards variation that is irrelevant to word meaning.
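
The logic of such experiments can be sketched in a few lines. The simulation below is simplified, with invented amplitudes and latencies rather than data from the study: it contrasts the brain’s averaged response to a repeated ‘standard’ sound with its response to an occasional ‘deviant’, and a difference wave emerges only if the listener’s auditory cortex registers the change.

```python
import numpy as np

rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.5, 601)    # seconds relative to sound onset

def average_erp(has_mismatch, n_trials=200):
    """Average brain response to a sound (arbitrary units). Every sound
    evokes an auditory N1 peak; a sound that the cortex treats as
    *different* additionally evokes a later mismatch deflection."""
    n1 = -2.0 * np.exp(-((times - 0.10) ** 2) / (2 * 0.02 ** 2))
    mismatch = -1.5 * np.exp(-((times - 0.18) ** 2) / (2 * 0.03 ** 2))
    clean = n1 + (mismatch if has_mismatch else 0.0)
    trials = clean + rng.normal(0.0, 1.0, (n_trials, times.size))
    return trials.mean(axis=0)

# Deviant-minus-standard difference wave for two simulated listeners:
# the English cortex flags the 'r'/'l' change, the Japanese cortex does not.
for listener, hears_change in (("English", True), ("Japanese", False)):
    diff = average_erp(hears_change) - average_erp(False)
    t_peak = times[np.argmin(diff)]
    print(f"{listener}: peak difference {diff.min():+.2f} at {t_peak * 1000:.0f} ms")
```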

For example, in English the difference between the sounds ‘r’ and ‘l’ is meaningful and serves as a basis for distinguishing words like rice and lice, or rack and lack; consequently, this difference is highlighted by the auditory cortex of an English speaker. A Japanese speaker’s brain, on the other hand, will not notice the difference between ‘r’ and ‘l’ right away, because in Japanese these two sounds are used interchangeably. This strategy, which highlights only the variation in sounds that matters for meaning, provides the quickest route to interpreting a word’s meaning.
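
One way to picture this is as a warping of the acoustic continuum between the two sounds. The sketch below uses invented boundary and slope values, not data from the study: the same equal-sized acoustic steps yield a sharp category boundary for a simulated English listener but an almost flat identification curve for a simulated Japanese listener.

```python
import numpy as np

# The main acoustic cue separating English 'r' (low third formant, ~1600 Hz)
# from 'l' (high third formant, ~2700 Hz), stepped in equal acoustic intervals.
f3_continuum = np.linspace(1600, 2700, 12)

def prob_hears_l(f3, boundary=2150.0, sharpness=0.02):
    """Idealised identification curve: the probability of reporting 'l'.
    The boundary and sharpness values are made up for illustration;
    sharpness stands in for how strongly the cortex warps the continuum."""
    return 1.0 / (1.0 + np.exp(-sharpness * (f3 - boundary)))

english = prob_hears_l(f3_continuum)                    # steep: two crisp categories
japanese = prob_hears_l(f3_continuum, sharpness=0.001)  # shallow: one broad category

for f3, pe, pj in zip(f3_continuum, english, japanese):
    print(f"F3 = {f3:4.0f} Hz | English P('l') = {pe:.2f} | Japanese P('l') = {pj:.2f}")
```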

Hence, what the brain perceives is not fully determined by the physical input to the ear; rather, it is filtered through the listener’s native language. Such selective – if not biased – perceptual abilities of adult listeners develop through their language experience during the early years of life. As a result, the brain is optimally wired for communication in the first (native) language. Unfortunately, this wiring may be less than ideal for learning a foreign language: the learner may find themselves a prisoner of their native language’s ‘regulations’, unable to perceive additional sound contrasts that are important for the new language. We are now trying to identify whether representations in the auditory cortex change as a result of continued exposure to a foreign language.

Source: University of Bristol