Struggling to Learn a New Language? Blame It on Your Stable Brain

A study in patients with epilepsy is helping researchers understand how the brain manages the task of learning a new language while retaining our mother tongue. The study, by neuroscientists at UC San Francisco, sheds light on the age-old question of why it’s so difficult to learn a second language as an adult.

The somewhat surprising results gave the team a window into how the brain navigates the tradeoff between neuroplasticity — the ability to grow new connections between neurons when learning new things — and stability, which allows us to maintain the integrated networks of things we’ve already learned. The findings appear in the Aug. 30 issue of Proceedings of the National Academy of Sciences.

“When learning a new language, our brains are somehow accommodating both of these forces as they’re competing against each other,” said Matt Leonard, PhD, assistant professor of neurological surgery and a member of the UCSF Weill Institute for Neurosciences.  

By using electrodes on the surface of the brain to follow high-resolution neural signals, the team found that clusters of neurons scattered throughout the speech cortex appear to fine-tune themselves as a listener gains familiarity with foreign sounds.  

“These are our first insights into what’s changing in the brain between first hearing the sounds of a foreign language and being able to recognize them,” said Leonard, who is a principal investigator on the study.  

Read more: University of California San Francisco

How does the brain process speech? We now know the answer, and it’s fascinating

Neuroscientists have long known that speech is processed in the auditory cortex, along with some curious activity within the motor cortex. How the motor cortex is involved, though, has been something of a mystery until now. A new study by two NYU scientists resolves one of the last holdouts in a process of discovery that started over a century and a half ago. In 1861, French neurologist Pierre Paul Broca identified what would come to be known as “Broca’s area,” a region in the posterior inferior frontal gyrus.

This area is responsible for processing and comprehending speech, as well as producing it. Interestingly, a fellow scientist on whom Broca had to operate was missing Broca’s area entirely after the operation, yet was still able to speak. He initially couldn’t form complex sentences, but in time regained all of his speaking abilities. This meant another region had pitched in, and that a certain amount of neuroplasticity was involved.

In 1871, German neurologist Carl Wernicke discovered another area responsible for processing speech through hearing, this time in the superior posterior temporal lobe. It’s now called Wernicke’s area. The model was updated in 1965 by the eminent behavioral neurologist Norman Geschwind, and the resulting map of the brain is known as the Wernicke-Geschwind model.

Wernicke and Broca gained their knowledge through studying patients with damage to certain parts of the brain. In the 20th century, electrical brain stimulation began to give us an even greater understanding of the brain’s inner workings. Patients undergoing brain surgery in the mid-century were given weak electrical brain stimulation. The current allowed surgeons to avoid damaging critically important areas. But it also gave them more insight into what areas controlled what functions.

With the advent of fMRI and other scanning technologies, we were able to look at activity in regions of the brain and how language travels across them. We now know that impulses associated with language pass between Broca’s and Wernicke’s areas. Communication between the two helps us understand grammar, how words sound, and their meaning. Another region, the fusiform gyrus, helps us classify words.

Read more: Big Think

The Brain Has Its Own “Autofill” Function for Speech

The world is an unpredictable place. But the brain has evolved a way to cope with the everyday uncertainties it encounters: rather than presenting us with many of them, it resolves them into a realistic model of the world. The body’s central controller predicts every contingency, using its stored database of past experiences, to minimize the element of surprise. Take vision, for example: we rarely see objects in their entirety, but our brains fill in the gaps to make a best guess at what we are seeing, and these predictions are usually an accurate reflection of reality.

The same is true of hearing, and neuroscientists have now identified a predictive-text-like brain mechanism that helps us anticipate what is coming next when we hear someone speaking. The findings, published this week in PLoS Biology, advance our understanding of how the brain processes speech. They also provide clues about how language evolved, and could even lead to new ways of diagnosing a variety of neurological conditions more accurately.

The new study builds on earlier findings that monkeys and human infants can implicitly learn to recognize artificial grammar, or the rules by which sounds in a made-up language are related to one another. Neuroscientist Yukiko Kikuchi of Newcastle University in England and her colleagues played sequences of nonsense speech sounds to macaques and humans. Consistent with the earlier findings, Kikuchi and her team found both species quickly learned the rules of the language’s artificial grammar. After this initial learning period the researchers played more sound sequences — some of which violated the fabricated grammatical rules. They used microelectrodes to record responses from hundreds of individual neurons as well as from large populations of neurons that process sound information. In this way they were able to compare the responses to both types of sequences and determine the similarities between the two species’ reactions.

Read more: Scientific American

New Study of the Words “A” and “The” Sheds Light on Language Acquisition

When kids start using language, how much of their know-how is innate, and how much is acquired by listening to others speak?

Now a study co-authored by an MIT professor uses a new approach to shed more light on this matter — a central issue in the area of language acquisition.

The results suggest that experience is an important component of early-childhood language usage, although it doesn’t necessarily account for all of a child’s language facility. Moreover, the extent to which a child learns grammar by listening appears to change over time, with a large increase occurring around age 2 and a leveling off in subsequent years.

“In this view, adult-like, rule-based [linguistic] development is the end-product of a construction of knowledge,” says Roger Levy, an MIT professor and co-author of a new paper summarizing the study. Or, as the paper states, the findings are consistent with the idea that children “lack rich grammatical knowledge at the outset of language learning but rapidly begin to generalize on the basis of structural regularities in their input.”

Read more: Neuroscience News

6 Potential Brain Benefits Of Bilingual Education

Brains, brains, brains. One thing we’ve learned at NPR Ed is that people are fascinated by brain research. And yet it can be hard to point to places where our education system is really making use of the latest neuroscience findings.

But there is one happy nexus where research is meeting practice: Bilingual education. “In the last 20 years or so, there’s been a virtual explosion of research on bilingualism,” says Judith Kroll, a professor at the University of California, Riverside.

Again and again, researchers have found, “bilingualism is an experience that shapes our brain for a lifetime,” in the words of Gigi Luk, an associate professor at Harvard’s Graduate School of Education.

At the same time, one of the hottest trends in public schooling is what’s often called dual-language or two-way immersion programs.

Read more: NPR

Brain ‘reads’ sentence same way in 2 languages

When the brain “reads” or decodes a sentence in English or Portuguese, its neural activation patterns are the same, researchers report.

Published in NeuroImage, the study is the first to show that different languages have similar neural signatures for describing events and scenes. By using a machine-learning algorithm, the research team was able to understand the relationship between sentence meaning and brain activation patterns in English and then recognize sentence meaning based on activation patterns in Portuguese.
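The decoding approach described above can be sketched in miniature. The following is a hypothetical toy simulation, not the study’s actual pipeline or data: assume each sentence meaning corresponds to a shared semantic vector, treat “English” and “Portuguese” brain-activation patterns as noisy views of that vector, learn one prototype pattern per meaning from the English side, and then decode the Portuguese patterns by nearest prototype.

```python
import math
import random

random.seed(0)

# Hypothetical setup: 5 sentence meanings, each a 20-dimensional
# shared semantic vector; activation patterns are noisy views of it.
N_MEANINGS, DIM, NOISE = 5, 20, 0.3

meanings = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_MEANINGS)]

def activation(meaning):
    """Simulated brain-activation pattern: shared semantics plus noise."""
    return [m + random.gauss(0, NOISE) for m in meaning]

# "Train" on English: store one activation pattern per meaning as a prototype.
english_prototypes = [activation(m) for m in meanings]

def decode(pattern):
    """Nearest-prototype decoding: return the index of the closest meaning."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(N_MEANINGS), key=lambda i: dist(pattern, english_prototypes[i]))

# "Test" on Portuguese: fresh noisy views of the same underlying meanings.
portuguese_trials = [(i, activation(meanings[i])) for i in range(N_MEANINGS)]
accuracy = sum(decode(p) == i for i, p in portuguese_trials) / N_MEANINGS
print(accuracy)
```

Here the stored prototype per meaning stands in for the trained model; the real study worked with fMRI data and a proper machine-learning classifier, and its key result is that the shared-semantics assumption made cross-language decoding possible at all.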

The findings can be used to improve machine translation, brain decoding across languages, and, potentially, second language instruction.

“This tells us that, for the most part, the language we happen to learn to speak does not change the organization of the brain,” says Marcel Just, professor of psychology at Carnegie Mellon University.

“Semantic information is represented in the same place in the brain and the same pattern of intensities for everyone. Knowing this means that brain-to-brain or brain-to-computer interfaces can probably be the same for speakers of all languages,” Just says.

Read more: Futurity

In the brain, one area sees familiar words as pictures, another sounds out words

Skilled readers can quickly recognize words when they read because each word has been placed in a visual dictionary of sorts, which functions separately from an area that processes the sounds of written words, say Georgetown University Medical Center (GUMC) neuroscientists. The visual dictionary idea rebuts a common theory that our brain needs to “sound out” words each time we see them.

This finding, published online in NeuroImage, matters because unraveling how the brain solves the complex task of reading can help uncover the brain basis of reading disorders, such as dyslexia, say the scientists.

“Beginning readers have to sound out words as they read, which makes reading a very long and laborious process,” says the study’s lead investigator, Laurie Glezer, PhD, a postdoctoral research fellow. The research was conducted in the Laboratory for Computational Cognitive Neuroscience at GUMC, led by Maximilian Riesenhuber, PhD.

“Even skilled readers occasionally have to sound out words they do not know. But once you become a fluent, skilled reader, you no longer have to sound out words you are familiar with; you can read them instantly,” Glezer explains. “We show that the brain has regions that specialize in doing each of the components of reading. The area that is processing the visual piece is different from the area that is doing the sounding out piece.”

Read more: ScienceDaily

Synaesthesia could help us understand how the brain processes language

When we speak, listen, read, or write, almost all of the language processing that happens in our brains goes on below the level of conscious awareness. We might be aware of grasping for a particular forgotten word, but we don’t actively think about linguistic concepts like morphemes (the building blocks of words, like the past tense morpheme “-ed”).

Psycholinguists try to delve under the surface to figure out what’s actually going on in the brain, and how well this matches up with our theoretical ideas of how languages fit together. For instance, linguists talk about morphemes like “-ed”, but do our brains actually work with morphemes when we’re producing or interpreting language? That is, do theoretical linguistic concepts have any psychological reality? An upcoming paper in the journal Cognition suggests an unusual way to investigate this: by testing synaesthetes.

Read more: The Guardian