Whether speaking Turkish or Norwegian, the brain’s language network looks the same

Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes.

However, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts. MIT neuroscientists have now performed brain imaging studies of speakers of 45 different languages. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers.

The findings, while not surprising, establish that the location and key properties of the language network appear to be universal. The work also lays the groundwork for future studies of linguistic elements that would be difficult or impossible to study in English speakers because English doesn’t have those features.

“This study is very foundational, extending some findings from English to a broad range of languages,” says Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “The hope is that now that we see that the basic properties seem to be general across languages, we can ask about potential differences between languages and language families in how they are implemented in the brain, and we can study phenomena that don’t really exist in English.”

Fedorenko is the senior author of the study, which appears today in Nature Neuroscience. Saima Malik-Moraleda, a PhD student in the Speech and Hearing Bioscience and Technology program at Harvard University, and Dima Ayyash, a former research assistant, are the lead authors of the paper.

Read more: MIT News

AI Detects Autism Speech Patterns Across Different Languages

A new study led by Northwestern University researchers used machine learning—a branch of artificial intelligence—to identify speech patterns in children with autism that were consistent between English and Cantonese, suggesting that features of speech might be a useful tool for diagnosing the condition.

Undertaken with collaborators in Hong Kong, the study yielded insights that could help scientists distinguish between genetic and environmental factors shaping the communication abilities of people with autism, potentially helping them learn more about the origin of the condition and develop new therapies.

Children with autism often talk more slowly than typically developing children, and exhibit other differences in pitch, intonation and rhythm. But those differences (called “prosodic differences” by researchers) have been surprisingly difficult to characterize in a consistent, objective way, and their origins have remained unclear for decades.

However, a team of researchers led by Northwestern scientists Molly Losh and Joseph C.Y. Lau, along with Hong Kong-based collaborator Patrick Wong and his team, successfully used supervised machine learning to identify speech differences associated with autism.

The data used to train the algorithm were recordings of English- and Cantonese-speaking young people with and without autism telling their own version of the story depicted in a wordless children’s picture book called “Frog, Where Are You?”
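
To make the general approach concrete, here is a minimal, hypothetical sketch of a supervised pipeline of this kind: a classifier trained on prosodic features (speech rate, pitch variability, rhythm regularity). The feature names, data, and model choice are invented placeholders, not the study’s actual pipeline.

```python
# Hypothetical sketch: supervised classification of speakers from prosodic
# features. All data and feature names are invented placeholders, so the
# accuracy will sit near chance; the point is the shape of the pipeline,
# not the result.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_speakers = 80

# One row per speaker: [speech_rate, pitch_variability, rhythm_regularity].
features = rng.normal(size=(n_speakers, 3))
labels = rng.integers(0, 2, size=n_speakers)  # 1 = autism diagnosis, 0 = not

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```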

The results were published in the journal PLOS ONE on June 8, 2022.

“When you have languages that are so structurally different, any similarities in speech patterns seen in autism across both languages are likely to be traits that are strongly influenced by the genetic liability to autism,” said Losh, who is the Jo Ann G. and Peter F. Dolle Professor of Learning Disabilities at Northwestern.

“But just as interesting is the variability we observed, which may point to features of speech that are more malleable, and potentially good targets for intervention.”

Lau added that the use of machine learning to identify the key elements of speech that were predictive of autism represented a significant step forward for researchers, who have been limited by the English-language bias of autism research and by the subjectivity involved in classifying speech differences between people with and without autism.

“Using this method, we were able to identify features of speech that can predict the diagnosis of autism,” said Lau, a postdoctoral researcher working with Losh in the Roxelyn and Richard Pepper Department of Communication Sciences and Disorders at Northwestern.

“The most prominent of those features is rhythm. We’re hopeful that this study can be the foundation for future work on autism that leverages machine learning.”

The researchers believe their work could contribute to a better understanding of autism. Artificial intelligence could make diagnosing autism easier by reducing the burden on healthcare professionals and by making diagnosis accessible to more people, Lau said. It could also provide a tool that might one day transcend cultures, because of a computer’s ability to analyze words and sounds in a quantitative way regardless of language.

Read more: Neuroscience News

How Abstract Concepts Are Represented in the Brain Across Cultures and Languages

Researchers at Carnegie Mellon University have explored the regions of the brain where concrete and abstract concepts materialize. A new study now asks whether people who grow up in different cultures and speak different languages form these concepts in the same regions of the brain.

“We wanted to look across languages to see if our cultural backgrounds influence how we understand, how we perceive abstract ideas like justice,” said Roberto Vargas, a doctoral candidate in psychology at the Dietrich College of Humanities and Social Sciences and lead author on the study.

Vargas is continuing fundamental research in neural and semantic organization initiated by Marcel Just, the D.O. Hebb University Professor of Psychology. Just began this process more than 30 years ago by scanning the brains of participants using a functional magnetic resonance imaging (fMRI) machine.

His research team began by identifying the regions of the brain that light up for concrete objects, like an apple, and later moved to abstract concepts from physics like force and gravity.

The latest study took the evaluation of abstract concepts one step further by exploring the regions of the brain that fire for abstract concepts across languages. In this case, the researchers studied people whose first language is Mandarin or English.

“The lab’s research is progressing to study universalities of not only single-concept representations, but also representations of larger bodies of knowledge, such as scientific and technical knowledge,” Just said. “Cultures and languages can give us a particular perspective of the world, but our mental filing cabinets are all very similar.”

According to Vargas, there is a fairly generalizable set of hardware, or network of brain regions, that people leverage when thinking about abstract information, but how people use these tools varies depending on culture and the meaning of the word.

Read more: Neuroscience News

Artificial intelligence sheds light on how the brain processes language

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion. 
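
As an illustration of what “predicting the next word” means in practice, the snippet below asks a small pretrained causal language model for the most probable continuation of a prompt. The model and prompt are arbitrary examples chosen for illustration, not the ones used in the MIT study.

```python
# Minimal next-word prediction with a pretrained causal language model
# (Hugging Face transformers + PyTorch). Model and prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The quick brown fox jumps over the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits       # shape: (1, sequence_length, vocab_size)

next_token_id = int(logits[0, -1].argmax())  # most probable next token
print(tokenizer.decode([next_token_id]))     # likely " lazy"
```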

Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain.

Computer models that perform well on other types of language tasks do not show this similarity to the human brain, offering evidence that the human brain may use next-word prediction to drive language processing.

“The better the model is at predicting the next word, the more closely it fits the human brain,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM), and an author of the new study. “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”

Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of CBMM and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute, are the senior authors of the study, which appears this week in the Proceedings of the National Academy of Sciences. Martin Schrimpf, an MIT graduate student who works in CBMM, is the first author of the paper.

Read more: MIT

Struggling to Learn a New Language? Blame It on Your Stable Brain

A study in patients with epilepsy is helping researchers understand how the brain manages the task of learning a new language while retaining our mother tongue. The study, by neuroscientists at UC San Francisco, sheds light on the age-old question of why it’s so difficult to learn a second language as an adult.

The somewhat surprising results gave the team a window into how the brain navigates the tradeoff between neuroplasticity — the ability to grow new connections between neurons when learning new things — and stability, which allows us to maintain the integrated networks of things we’ve already learned. The findings appear in the Aug. 30 issue of Proceedings of the National Academy of Sciences.

“When learning a new language, our brains are somehow accommodating both of these forces as they’re competing against each other,” said Matt Leonard, PhD, assistant professor of neurological surgery and a member of the UCSF Weill Institute for Neurosciences.  

By using electrodes on the surface of the brain to follow high-resolution neural signals, the team found that clusters of neurons scattered throughout the speech cortex appear to fine-tune themselves as a listener gains familiarity with foreign sounds.  

“These are our first insights into what’s changing in the brain between first hearing the sounds of a foreign language and being able to recognize them,” said Leonard, who is a principal investigator on the study.  

Read more: University of California San Francisco

How does the brain process speech? We now know the answer, and it’s fascinating

Neuroscientists have known for some time that speech is processed in the auditory cortex, along with some curious activity within the motor cortex. How the motor cortex is involved, though, has been something of a mystery until now. A new study by two NYU scientists reveals one of the last holdouts in a process of discovery that began over a century and a half ago. In 1861, French neurologist Pierre Paul Broca identified what would come to be known as “Broca’s area.” This is a region in the posterior inferior frontal gyrus.

This area is responsible for processing and comprehending speech, as well as producing it. Interestingly, a fellow scientist on whom Broca had to operate was missing Broca’s area entirely after the operation, yet he was still able to speak. He initially couldn’t make complex sentences, but in time he regained all of his speaking abilities. This meant another region had pitched in, and a certain amount of neuroplasticity was involved.

In 1874, German neurologist Carl Wernicke discovered another area responsible for processing speech through hearing, this time in the superior posterior temporal lobe. It’s now called Wernicke’s area. The model was updated in 1965 by the eminent behavioral neurologist Norman Geschwind. The updated map of the brain is known as the Wernicke-Geschwind model.

Wernicke and Broca gained their knowledge through studying patients with damage to certain parts of the brain. In the 20th century, electrical brain stimulation began to give us an even greater understanding of the brain’s inner workings. Patients undergoing brain surgery in the mid-20th century were given weak electrical brain stimulation. The current allowed surgeons to avoid damaging critically important areas. But it also gave them more insight into which areas controlled which functions.

With the advent of fMRI and other scanning technologies, we have been able to look at activity in regions of the brain and at how language travels across them. We now know that impulses associated with language travel between Broca’s and Wernicke’s areas. Communication between the two helps us understand grammar, how words sound, and their meaning. Another region, the fusiform gyrus, helps us classify words.

Read more: Big Think

The Brain Has Its Own “Autofill” Function for Speech

The world is an unpredictable place. But the brain has evolved a way to cope with the everyday uncertainties it encounters—it doesn’t present us with many of them, but instead resolves them into a realistic model of the world. The body’s central controller predicts every contingency, using its stored database of past experiences, to minimize the element of surprise. Take vision, for example: We rarely see objects in their entirety, but our brains fill in the gaps to make a best guess at what we are seeing—and these predictions are usually an accurate reflection of reality.

The same is true of hearing, and neuroscientists have now identified a brain mechanism akin to predictive text that helps us anticipate what is coming next when we hear someone speaking. The findings, published this week in PLoS Biology, advance our understanding of how the brain processes speech. They also provide clues about how language evolved, and could even lead to new ways of diagnosing a variety of neurological conditions more accurately.

The new study builds on earlier findings that monkeys and human infants can implicitly learn to recognize artificial grammar, or the rules by which sounds in a made-up language are related to one another. Neuroscientist Yukiko Kikuchi of Newcastle University in England and her colleagues played sequences of nonsense speech sounds to macaques and humans. Consistent with the earlier findings, Kikuchi and her team found both species quickly learned the rules of the language’s artificial grammar. After this initial learning period the researchers played more sound sequences—some of which violated the fabricated grammatical rules. They used microelectrodes to record responses from hundreds of individual neurons as well as from large populations of neurons that process sound information. In this way they were able to compare the responses to the two types of sequences and determine the similarities between the two species’ reactions.
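
For readers unfamiliar with artificial grammars, the toy sketch below generates rule-following and rule-violating syllable sequences of the general kind such experiments use. The syllables and the alternation rule are invented for illustration and are not those from Kikuchi’s study.

```python
# Toy artificial grammar: legal sequences alternate syllables from category A
# and category B; a "violation" breaks that alternation at one position.
import random

random.seed(0)
SYLLABLES = {"A": ["ba", "di", "ku"], "B": ["to", "ne", "so"]}
PATTERN = ["A", "B", "A", "B"]  # the "grammar": categories must alternate

def grammatical_sequence():
    return [random.choice(SYLLABLES[cat]) for cat in PATTERN]

def violation_sequence():
    seq = grammatical_sequence()
    i = random.randrange(len(PATTERN))
    wrong_cat = "B" if PATTERN[i] == "A" else "A"
    seq[i] = random.choice(SYLLABLES[wrong_cat])  # swap in the wrong category
    return seq

print("grammatical:", " ".join(grammatical_sequence()))
print("violation:  ", " ".join(violation_sequence()))
```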

Read more: Scientific American

New Study of the Words “A” and “The” Sheds Light on Language Acquisition

When kids start using language, how much of their know-how is intrinsic, and how much is acquired by listening to others speak?

Now a study co-authored by an MIT professor uses a new approach to shed more light on this matter — a central issue in the area of language acquisition.

The results suggest that experience is an important component of early-childhood language usage although it doesn’t necessarily account for all of a child’s language facility. Moreover, the extent to which a child learns grammar by listening appears to change over time, with a large increase occurring around age 2 and a leveling off taking place in subsequent years.

“In this view, adult-like, rule-based [linguistic] development is the end-product of a construction of knowledge,” says Roger Levy, an MIT professor and co-author of a new paper summarizing the study. Or, as the paper states, the findings are consistent with the idea that children “lack rich grammatical knowledge at the outset of language learning but rapidly begin to generalize on the basis of structural regularities in their input.”

Read more: Neuroscience News

6 Potential Brain Benefits Of Bilingual Education

Brains, brains, brains. One thing we’ve learned at NPR Ed is that people are fascinated by brain research. And yet it can be hard to point to places where our education system is really making use of the latest neuroscience findings.

But there is one happy nexus where research is meeting practice: Bilingual education. “In the last 20 years or so, there’s been a virtual explosion of research on bilingualism,” says Judith Kroll, a professor at the University of California, Riverside.

Again and again, researchers have found, “bilingualism is an experience that shapes our brain for a lifetime,” in the words of Gigi Luk, an associate professor at Harvard’s Graduate School of Education.

At the same time, one of the hottest trends in public schooling is what’s often called dual-language or two-way immersion programs.

Read more: NPR

Brain ‘reads’ sentence same way in 2 languages

When the brain “reads” or decodes a sentence in English or Portuguese, its neural activation patterns are the same, researchers report.

Published in NeuroImage, the study is the first to show that different languages have similar neural signatures for describing events and scenes. By using a machine-learning algorithm, the research team was able to understand the relationship between sentence meaning and brain activation patterns in English and then recognize sentence meaning based on activation patterns in Portuguese.
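
A drastically simplified stand-in for this kind of cross-language decoding is sketched below: synthetic “activation patterns” for the same sentence meanings in two languages are matched by correlation. The data are random placeholders; the actual study worked with fMRI recordings and a trained model of sentence meaning.

```python
# Simplified cross-language matching on synthetic data: each sentence meaning
# produces a shared signal plus language-specific noise; Portuguese patterns
# are matched to English patterns by correlation. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_sentences, n_voxels = 60, 200

meaning = rng.normal(size=(n_sentences, n_voxels))          # shared signal
english = meaning + 0.5 * rng.normal(size=(n_sentences, n_voxels))
portuguese = meaning + 0.5 * rng.normal(size=(n_sentences, n_voxels))

def zscore(x):
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

# Correlation between every Portuguese pattern and every English pattern.
similarity = zscore(portuguese) @ zscore(english).T / n_voxels
predicted = similarity.argmax(axis=1)                        # best English match
accuracy = (predicted == np.arange(n_sentences)).mean()
print(f"Cross-language matching accuracy: {accuracy:.2f}")
```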

The findings can be used to improve machine translation, brain decoding across languages, and, potentially, second language instruction.

“This tells us that, for the most part, the language we happen to learn to speak does not change the organization of the brain,” says Marcel Just, professor of psychology at Carnegie Mellon University.

“Semantic information is represented in the same place in the brain and the same pattern of intensities for everyone. Knowing this means that brain-to-brain or brain-to-computer interfaces can probably be the same for speakers of all languages,” Just says.

Read more: Futurity

In the brain, one area sees familiar words as pictures, another sounds out words

Skilled readers can quickly recognize words when they read because those words have been placed in a visual dictionary of sorts, which functions separately from an area that processes the sounds of written words, say Georgetown University Medical Center (GUMC) neuroscientists. The visual dictionary idea rebuts a common theory that our brain needs to “sound out” words each time we see them.

This finding, published online in Neuroimage, matters because unraveling how the brain solves the complex task of reading can help in uncovering the brain basis of reading disorders, such as dyslexia, say the scientists.

“Beginning readers have to sound out words as they read, which makes reading a very long and laborious process,” says the study’s lead investigator, Laurie Glezer, PhD, a postdoctoral research fellow. The research was conducted in the Laboratory for Computational Cognitive Neuroscience at GUMC, led by Maximilian Riesenhuber, PhD.

“Even skilled readers occasionally have to sound out words they do not know. But once you become a fluent, skilled reader you no longer have to sound out words you are familiar with, you can read them instantly,” Glezer explains. “We show that the brain has regions that specialize in doing each of the components of reading. The area that is processing the visual piece is different from the area that is doing the sounding out piece.”

Read more: ScienceDaily

Synaesthesia could help us understand how the brain processes language

When we speak, listen, read, or write, almost all of the language processing that happens in our brains goes on below the level of conscious awareness. We might be aware of grasping for a particular forgotten word, but we don’t actively think about linguistic concepts like morphemes (the building blocks of words, like the past tense morpheme “-ed”).

Psycholinguists try to delve under the surface to figure out what’s actually going on in the brain, and how well this matches up with our theoretical ideas of how languages fit together. For instance, linguists talk about morphemes like “-ed”, but do our brains actually work with morphemes when we’re producing or interpreting language? That is, do theoretical linguistic concepts have any psychological reality? An upcoming paper in the journal Cognition suggests an unusual way to investigate this: by testing synaesthetes.

Read more: The Guardian