How Abstract Concepts Are Represented in the Brain Across Cultures and Languages

Researchers at Carnegie Mellon University have explored the regions of the brain where concrete and abstract concepts materialize. A new study now explores whether people who grow up in different cultures and speak different languages form these concepts in the same regions of the brain.

“We wanted to look across languages to see if our cultural backgrounds influence how we understand, how we perceive abstract ideas like justice,” said Roberto Vargas, a doctoral candidate in psychology at the Dietrich College of Humanities and Social Sciences and lead author on the study.

Vargas is continuing fundamental research in neural and semantic organization initiated by Marcel Just, the D.O. Hebb University Professor of Psychology. Just began this process more than 30 years ago by scanning the brains of participants using a functional magnetic resonance imaging (fMRI) machine.

His research team began by identifying the regions of the brain that light up for concrete objects, like an apple, and later moved to abstract concepts from physics like force and gravity.

The latest study took the evaluation of abstract concepts one step further by exploring the regions of the brain that fire for abstract concepts depending on a person's language. In this case, the researchers studied people whose first language is Mandarin or English.

“The lab’s research is progressing to study universalities of not only single concept representations, but also representations of larger bodies of knowledge such as scientific and technical knowledge,” Just said. “Cultures and languages can give us a particular perspective of the world, but our mental filing cabinets are all very similar.”

According to Vargas, there is a fairly generalizable set of hardware, or network of brain regions, that people leverage when thinking about abstract information, but how people use these tools varies depending on culture and the meaning of the word.

Read more: Neuroscience News

Five diseases attack language areas in brain

Five different diseases attack the language areas in the left hemisphere of the brain, slowly causing the progressive impairments of language known as primary progressive aphasia (PPA), reports a new Northwestern Medicine study.

“We’ve discovered each of these diseases hits a different part of the language network,” said lead author Dr. M. Marsel Mesulam, director of Northwestern’s Mesulam Center for Cognitive Neurology and Alzheimer’s Disease. “In some cases, the disease hits the area responsible for grammar, in others the area responsible for word comprehension. Each disease progresses at a different rate and has different implications for intervention.”

This study is based on the largest set of PPA autopsies—118 cases—ever assembled.

It will be published April 20 in the journal Brain.

“The patients had been followed for more than 25 years, so this is the most extensive study to date on life expectancy, type of language impairment and relationship of disease to details of language impairment,” said Mesulam, also chief of behavioral neurology at Northwestern University Feinberg School of Medicine.

Patients with PPA were prospectively enrolled in a longitudinal study that included language testing and imaging of brain structure and brain function. The study included consent to brain donation at death.

An estimated one in 100,000 people have PPA, Mesulam said.

Read more: Medical Xpress

The Scientific Reason Singers Have a Knack for Language

What’s the difference between Mozart and Pavarotti? Well, one was a child prodigy and composer who systematically learned the rules of music at an early age — the other, a pitch-perfect expert at mimicry.  

Singers have a knack for foreign languages, most notably when it comes to pronunciation and accent because, like parrots, they mimic what they hear. It’s something that Pavarotti, who couldn’t read sheet music, did with his operatic singing. 

“The singer is the best with the accent,” says Susanne Reiterer, a neurolinguistics researcher at the University of Vienna in Austria. “A foreign accent is a piece of cake for them.”

Studies reveal that Heschl’s gyrus, a ridge on the brain’s surface that contains the primary auditory cortex, plays a significant role in both musical aptitude and language aptitude, especially in people with a higher number of gyri. So some researchers believe that, based on brain structure, some people are simply born to be musicians. “Talking uses the same biological makeup as singing, so it must be related biologically and neurobiologically,” Reiterer says. “It’s almost like two sides of one coin.”

C López Ramón Y Cajal, a descendant of Santiago Ramón y Cajal — the founder of modern neurobiology — found that the gyri are formed mid-pregnancy and continue to grow as the fetus develops, as reported in a 2019 Medical Hypotheses article.

Rehearsing and training over time have an impact on the brain, but Reiterer says biology also plays a leading role. “You can change a lot by rehearsing, but something is pre-given as well,” Reiterer adds. “It’s 50/50 genes and environment, and if you have a strong pre-disposition [musically] then you have more power basically in your auditory areas. You can discriminate sounds better.” 

In Reiterer’s 2015 Frontiers in Human Neuroscience study, 96 participants categorized as instrumentalists, vocalists and non-musicians were tested on their ability to imitate a language unknown to them — in this case, Hindi. Her team found that vocalists outperformed instrumentalists in foreign language imitation, and that both groups outperformed non-musicians. This research also suggested that vocal motor training may allow singers to learn a language faster.

And when children experience music early on in life, they’re able to achieve lifelong neuroplasticity, wrote Nina Kraus, a neuroscientist at Northwestern University, and co-author Travis White-Schwoch in American Scientist. At Northwestern’s Brainvolts lab, this team also found that the more musicians play, the more they benefit: Speech-sound processing ability builds up across one’s lifespan. Musicians exhibited better attention, sharper working memory, and better neural speech-sound processing as the number of practicing years increased. 

Even in the early 2000s, research suggested that long-term training in music and pitch recognition allows a person to better process the pitch patterns of a foreign language, a concept that Reiterer also explored in an Annual Review of Applied Linguistics article published this March. 

Reiterer has also investigated how a person’s initial aptitude develops as a result of biological maturation, socio-cultural factors and musical ability, among other influences, as reported in a May 2021 Neurobiology of Language article.

“It’s the body that feels where I have to move my tongue,” Reiterer says. “And this feeling has a correlation in the brain, proprioception. That is the key to good pronunciation and the key to a good singer.” 

So, for those tapping into both language and music — things just click. 

Read more: Scientific American

How Brains Seamlessly Switch Between Languages

Billions of people worldwide speak two or more languages. (Though the estimates vary, many sources assert that more than half of the planet is bilingual or multilingual.) One of the most common experiences for these individuals is a phenomenon that experts call “code switching,” or shifting from one language to another within a single conversation or even a sentence.

This month Sarah Frances Phillips, a linguist and graduate student at New York University, and her adviser Liina Pylkkänen published findings from brain imaging that underscore the ease with which these switches happen and reveal how the neurological patterns that support this behavior are very similar to those seen in monolingual people. The new study reveals how code switching—which some multilingual speakers worry is “cheating,” in contrast to sticking to just one language—is normal and natural. Phillips spoke with Mind Matters editor Daisy Yuhas about these findings and why some scientists believe bilingual speakers may have certain cognitive advantages.

Read more: Scientific American

Struggling to Learn a New Language? Blame It on Your Stable Brain

A study in patients with epilepsy is helping researchers understand how the brain manages the task of learning a new language while retaining our mother tongue. The study, by neuroscientists at UC San Francisco, sheds light on the age-old question of why it’s so difficult to learn a second language as an adult.

The somewhat surprising results gave the team a window into how the brain navigates the tradeoff between neuroplasticity — the ability to grow new connections between neurons when learning new things — and stability, which allows us to maintain the integrated networks of things we’ve already learned. The findings appear in the Aug. 30 issue of Proceedings of the National Academy of Sciences.

“When learning a new language, our brains are somehow accommodating both of these forces as they’re competing against each other,” said Matt Leonard, PhD, assistant professor of neurological surgery and a member of the UCSF Weill Institute for Neurosciences.  

By using electrodes on the surface of the brain to follow high-resolution neural signals, the team found that clusters of neurons scattered throughout the speech cortex appear to fine-tune themselves as a listener gains familiarity with foreign sounds.  

“These are our first insights into what’s changing in the brain between first hearing the sounds of a foreign language and being able to recognize them,” said Leonard, who is a principal investigator on the study.  

Read more: University of California San Francisco

The more languages you speak, the easier it is for the brain to learn more

TOKYO, Japan — For those of us confined to knowing just one language, learning an additional language can feel impossible. Many bilinguals, however, marvel at the language skills of multilinguals (individuals fluent in three or more languages). Interestingly, a new Japanese study reports ground-breaking neurological evidence indicating that language skills are additive. In other words, the more languages you speak, the easier it will be to learn another.

These findings potentially explain why one person fluent in English and Spanish may be in awe of someone who can speak German, Russian, and English. Meanwhile, that trilingual individual can’t believe it when he or she meets someone else who can speak German, Italian, French, English, and Russian.

“The traditional idea is, if you understand bilinguals, you can use those same details to understand multilinguals. We rigorously checked that possibility with this research and saw multilinguals’ language acquisition skills are not equivalent, but superior to those of bilinguals,” says study co-author Professor Kuniyoshi L. Sakai from the University of Tokyo in a release.

Researchers measured the brain activity of 21 bilingual and 28 multilingual study participants as each person attempted to decipher words and sentences written and spoken in Kazakh — a language no participant was familiar with at all. All subjects were native Japanese speakers, with most also being fluent in English. Some of the multilingual participants could speak up to five languages including Chinese, Russian, Korean, and German.

Read more: StudyFinds

How Language Hijacked the Brain

I’m sitting in the sun on one of the first mild days of the spring, talking with a modern-day flintknapper about the origins of human language. His name is Neill Bovaird, and he’s neither an archaeologist nor a linguist, just a 38-year-old bearded guy with a smartphone in his pocket who uses Stone Age technology to produce Stone Age tools. Bovaird has been flintknapping for a couple decades, and as we talk, the gok gok gok of him striking a smaller rock against a larger one punctuates our conversation. Every now and then the gokking stops: A new flake, sharper than a razor blade, breaks off in his palm.

I’ve come to see Bovaird, who teaches wilderness-survival skills in western Massachusetts, because I want to better understand the latest theories on the emergence of language—particularly a new body of research arguing that if not for our hominin ancestors’ hard-earned ability to produce complex tools, language as we know it might not have evolved at all. The research is occurring at the cutting-edge intersections of evolutionary biology, experimental archaeology, neuroscience, and linguistics, but much of it is driven by a very old question: Where did language come from?

Oren Kolodny, a biologist at Stanford University, puts the question in more scientific terms: “What kind of evolutionary pressures could have given rise to this really weird and surprising phenomenon that is so critical to the essence of being human?” And he has proposed a provocative answer. In a recent paper in the journal Philosophical Transactions of the Royal Society B, Kolodny argues that early humans—while teaching their kin how to make complex tools—hijacked the capacity for language from themselves.

Read more: The Atlantic

How does the brain process speech? We now know the answer, and it’s fascinating

Neuroscientists have known for some time that speech is processed in the auditory cortex, along with some curious activity within the motor cortex. How this last cortex is involved, though, has been something of a mystery until now. A new study by two NYU scientists resolves one of the last holdouts in a process of discovery that started over a century and a half ago. In 1861, French neurologist Pierre Paul Broca identified what would come to be known as “Broca’s area.” This is a region in the posterior inferior frontal gyrus.

This area is responsible for processing and comprehending speech, as well as producing it. Interestingly, a fellow scientist, on whom Broca had to operate, was found after the operation to be missing Broca’s area entirely. Yet he was still able to speak. At first he couldn’t form complex sentences, but in time he regained all speaking abilities. This meant another region had pitched in, and a certain amount of neuroplasticity was involved.

In 1871, German neurologist Carl Wernicke discovered another area responsible for processing speech through hearing, this time in the superior posterior temporal lobe. It’s now called Wernicke’s area. The model was updated in 1965 by the eminent behavioral neurologist Norman Geschwind. The updated map of the brain is known as the Wernicke-Geschwind model.

Wernicke and Broca gained their knowledge by studying patients with damage to certain parts of the brain. In the 20th century, electrical brain stimulation began to give us an even greater understanding of the brain’s inner workings. Patients undergoing brain surgery in the middle of the century were given weak electrical brain stimulation. The current allowed surgeons to avoid damaging critically important areas. But it also gave them more insight into what areas controlled what functions.

With the advent of the fMRI and other scanning technology, we were able to look at the activity in regions of the brain and how language travels across them. We now know that impulses associated with language go between Broca’s and Wernicke’s areas. Communication between the two helps us understand grammar, how words sound, and their meaning. Another region, the fusiform gyrus, helps us classify words.

Read more: Big Think

Language Utilizes Ancient Brain Circuits That Predate Humans

A new paper by an international team of researchers presents strong evidence that language is learned using two general-purpose brain systems (declarative memory and procedural memory) that are evolutionarily ancient and not language specific. Contrary to popular belief, the researchers found that children learning their native language and adults learning a foreign language do not rely on brain circuitry specifically dedicated to language learning. Instead, language acquisition piggybacks on ancient, general-purpose neurocognitive mechanisms that preexist Homo sapiens.

These findings were published online January 29, 2018, in the journal Proceedings of the National Academy of Sciences (PNAS). For this analysis, the research team statistically synthesized the findings of 16 previous studies that examined language learning via declarative and procedural memory, which are two well-studied brain systems.

What Is the Difference Between Declarative Memory and Procedural Memory?

Declarative memory refers to crystallized knowledge that you could learn while sitting in a chair without having to practice finely-tuned motor coordination. Declarative memories, such as knowing all 50 states and the District of Columbia or memorizing SAT vocab words, can easily be described on a written test. On the flip side, procedural memory encompasses things like playing a musical instrument or riding a bicycle, which everyone must learn by actually performing the task. Over time, procedural memory becomes automatized in unconscious ways through practice, practice, practice.

Over a decade ago, when I created “The Athlete’s Way” program to optimize sports performance, the foundation of my coaching method was to take a dual-pronged approach that targeted declarative (explicit) memory and procedural (implicit) memory separately. Notably, the discovery that ancient brain circuits are used to learn a language—and are also used to master sports—corroborates that these neurocognitive systems have multiple purposes.

“These brain systems are also found in animals. For example, rats use them when they learn to navigate a maze,” co-author Phillip Hamrick of Kent State University in Ohio said in a statement. “Whatever changes these systems might have undergone to support language, the fact that they play an important role in this critical human ability is quite remarkable.”

Interestingly, results of this analysis showed that memorizing vocabulary words used in a language relied on declarative memory. However, grammar and syntax, which allow us to fluidly combine words into sentences that follow the rules of a language, rely more on procedural memory.

When acquiring their native language, children utilize procedural memory to master grammar and syntax without necessarily “knowing” the rules. However, when adults begin to learn a second language, grammatical rules are initially memorized using declarative memory. As would be expected, grammar and syntax switch to procedural memory systems at later stages of language acquisition as someone becomes more fluent.

“The findings have broad research, educational, and clinical implications,” co-author Jarrad Lum of Deakin University in Melbourne, Australia, said in a statement.

Read more: Psychology Today

How you learned a second language influences the way your brain works

Over the past few years, you might have noticed a surfeit of articles covering current research on bilingualism. Some of them suggest that it sharpens the mind, while others are clearly intended to provoke more doubt than confidence, such as Maria Konnikova’s “Is Bilingualism Really an Advantage?” (2015) in The New Yorker. The pendulum swing of the news cycle reflects a real debate in the cognitive science literature, wherein some groups have observed effects of bilingualism on non-linguistic skills, abilities and function, and others have been unable to replicate these findings.

Despite all the fuss that has been made about the “bilingual advantage,” most researchers have moved on from the simplistic ‘is there an advantage or not’ debate. Rather than asking whether bilingualism per se confers a cognitive advantage, researchers are now taking a more nuanced approach by exploring the various aspects of bilingualism to better understand their individual effects.

To give an idea of the nuances I am talking about, consider this: there is more than one type of bilingualism. A “simultaneous bilingual” learns two languages from birth; an “early sequential bilingual” might speak one language at home but learn to speak the community language at school; and a “late sequential bilingual” might grow up with one language and then move to a country that speaks another. The differences between these three types are not trivial—they often lead to different levels of proficiency and fluency in multiple aspects of language, from pronunciation to reading comprehension.

Read more: Quartz

6 Potential Brain Benefits Of Bilingual Education

Brains, brains, brains. One thing we’ve learned at NPR Ed is that people are fascinated by brain research. And yet it can be hard to point to places where our education system is really making use of the latest neuroscience findings.

But there is one happy nexus where research is meeting practice: Bilingual education. “In the last 20 years or so, there’s been a virtual explosion of research on bilingualism,” says Judith Kroll, a professor at the University of California, Riverside.

Again and again, researchers have found, “bilingualism is an experience that shapes our brain for a lifetime,” in the words of Gigi Luk, an associate professor at Harvard’s Graduate School of Education.

At the same time, one of the hottest trends in public schooling is what’s often called dual-language or two-way immersion programs.

Read more: NPR

“Whistled Languages” Reveal How the Brain Processes Information

Before electronic communications became a ubiquitous part of people’s lives, rural villagers created whistled versions of their native languages to speak from hillside to hillside or even house to house.

Herodotus mentioned whistled languages in the fourth book of his work The Histories, but until recently linguists had done little research on the sounds and meanings of this now endangered form of communication.

New investigations have discovered the presence of whistled speech all over the globe. About 70 populations worldwide communicate this way, a far greater number than the dozen or so groups that had been previously identified.

Linguists have tried to promote interest in these languages, and schools in the Canary Islands now teach the local variant. A whistled language represents both a cultural heritage and a way to study how the brain processes information.

Read more: Scientific American