The more languages you speak, the easier it is for the brain to learn more

TOKYO, Japan — For those of us confined to knowing just one language, learning an additional one can feel impossible. Many bilinguals, however, marvel at the language skills of multilinguals (individuals fluent in three or more languages). Interestingly, a new Japanese study reports ground-breaking neurological evidence indicating that language skills are additive. In other words, the more languages you speak, the easier it will be to learn another.

These findings potentially explain why one person fluent in English and Spanish may be in awe of someone who can speak German, Russian, and English. Meanwhile, that trilingual individual can’t believe it when he or she meets someone else who can speak German, Italian, French, English, and Russian.

“The traditional idea is, if you understand bilinguals, you can use those same details to understand multilinguals. We rigorously checked that possibility with this research and saw multilinguals’ language acquisition skills are not equivalent, but superior to those of bilinguals,” says study co-author Professor Kuniyoshi L. Sakai from the University of Tokyo in a release.

Researchers measured the brain activity of 21 bilingual and 28 multilingual study participants as each person attempted to decipher words and sentences written and spoken in Kazakh — a language no participant was familiar with at all. All subjects were native Japanese speakers, with most also being fluent in English. Some of the multilingual participants could speak up to five languages including Chinese, Russian, Korean, and German.

Read more: StudyFinds

Foot greetings and face condoms: Germans coin 1,200 new words about the pandemic

People in Germany have coined more than 1,200 new words about COVID-19 since the pandemic began. 

There’s coronamüde, which literally translates to “corona-tired,” to describe pandemic fatigue. If that doesn’t quite cut it, you could go with the more dramatic coronaangst. Either feeling is likely to set in when you’re overzoomed from too many video conferencing calls.

“I think we are now in a very extraordinary situation and we have many different new things in our world. And I think when new, very relevant things happen in our world, these new things are looking for a name,” Christine Möhrs, a lexicographer who has been tracking the new terminology, told As It Happens guest host Peter Armstrong.

“If we can talk about things and have names for them, then we can, I think, communicate with each other, and it’s possible for people to have an exchange about the current events and the current crisis. And I think this is a very important human mechanism.”

Read more: CBC

‘Anumeric’ people: What happens when a language has no words for numbers?

Numbers do not exist in all cultures. There are numberless hunter-gatherers embedded deep in Amazonia, living along branches of the world’s largest river tree. Instead of using words for precise quantities, these people rely exclusively on terms analogous to “a few” or “some.”

In contrast, our own lives are governed by numbers. As you read this, you are likely aware of what time it is, how old you are, your checking account balance, your weight and so on. The exact (and exacting) numbers we think with impact everything from our schedules to our self-esteem.

But, in a historical sense, numerically fixated people like us are the unusual ones. For the bulk of our species’ approximately 200,000-year lifespan, we had no means of precisely representing quantities. What’s more, the 7,000 or so languages that exist today vary dramatically in how they utilize numbers.

Speakers of anumeric, or numberless, languages offer a window into how the invention of numbers reshaped the human experience. In a new book, I explore the ways in which humans invented numbers, and how numbers subsequently played a critical role in other milestones, from the advent of agriculture to the genesis of writing.

Read more: The Conversation

Robo-writers: the rise and risks of language-generating AI

In June 2020, a new and powerful artificial intelligence (AI) began dazzling technologists in Silicon Valley. Called GPT-3 and created by the research firm OpenAI in San Francisco, California, it was the latest and most powerful in a series of ‘large language models’: AIs that generate fluent streams of text after imbibing billions of words from books, articles and websites. GPT-3 had been trained on around 200 billion words, at an estimated cost of tens of millions of dollars.

The developers who were invited to try out GPT-3 were astonished. “I have to say I’m blown away,” wrote Arram Sabeti, founder of a technology start-up who is based in Silicon Valley. “It’s far more coherent than any AI language system I’ve ever tried. All you have to do is write a prompt and it’ll add text it thinks would plausibly follow. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s hilarious and frightening. I feel like I’ve seen the future.”

OpenAI’s team reported that GPT-3 was so good that people found it hard to distinguish its news stories from prose written by humans. It could also answer trivia questions, correct grammar, solve mathematics problems and even generate computer code if users told it to perform a programming task. Other AIs could do these things, too, but only after being specifically trained for each job.

Large language models are already business propositions. Google uses them to improve its search results and language translation; Facebook, Microsoft and Nvidia are among other tech firms that make them. OpenAI keeps GPT-3’s code secret and offers access to it as a commercial service. (OpenAI is legally a non-profit company, but in 2019 it created a for-profit subentity called OpenAI LP and partnered with Microsoft, which invested a reported US$1 billion in the firm.) Developers are now testing GPT-3’s ability to summarize legal documents, suggest answers to customer-service enquiries, propose computer code, run text-based role-playing games or even identify at-risk individuals in a peer-support community by labelling posts as cries for help.

Despite its versatility and scale, GPT-3 hasn’t overcome the problems that have plagued other programs created to generate text. “It still has serious weaknesses and sometimes makes very silly mistakes,” Sam Altman, OpenAI’s chief executive, tweeted last July. It works by observing the statistical relationships between the words and phrases it reads, but doesn’t understand their meaning.
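
To make the idea of “statistical relationships” concrete, here is a minimal Python sketch. It is emphatically not how GPT-3 works internally (GPT-3 is a huge neural network, not a lookup table), but it shows how a program can continue a prompt using nothing but word co-occurrence counts, with no grasp of meaning; the toy corpus is invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Invented toy corpus; GPT-3's real training data ran to roughly 200 billion words.
corpus = (
    "the model reads text and predicts the next word . "
    "the model does not understand the text that it reads ."
).split()

# Record which words follow which (bigram statistics).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def continue_prompt(word, length=8):
    """Extend a one-word prompt by repeatedly sampling a statistically likely next word."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        out.append(nxt)
    return " ".join(out)

print(continue_prompt("the"))  # e.g. "the model reads text and predicts the next word"
```

Scaled up from two sentences to billions of words, and from bigram counts to a neural network with billions of parameters, this same next-word-prediction setup is the basic source of large language models’ fluency.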

Accordingly, just like smaller chatbots, it can spew hate speech and generate racist and sexist stereotypes, if prompted — faithfully reflecting the associations in its training data. It will sometimes give nonsensical answers (“A pencil is heavier than a toaster”) or outright dangerous replies. A health-care company called Nabla asked a GPT-3 chatbot, “Should I kill myself?” It replied, “I think you should.”

Read more: Nature

In a Momentous Discovery, Scientists Show Neanderthals Could Produce Human-Like Speech

Our Neanderthal cousins had the capacity to both hear and produce the speech sounds of modern humans, a new study has found.

Based on a detailed analysis and digital reconstruction of the structure of the bones in their skulls, the study settles one aspect of a decades-long debate over the linguistic capabilities of Neanderthals.

“This is one of the most important studies I have been involved in during my career,” said palaeoanthropologist Rolf Quam of Binghamton University.

“The results are solid and clearly show the Neanderthals had the capacity to perceive and produce human speech. This is one of the very few current, ongoing research lines relying on fossil evidence to study the evolution of language, a notoriously tricky subject in anthropology.”

The notion that Neanderthals (Homo neanderthalensis) were much more primitive than modern humans (Homo sapiens) is outdated, and in recent years a growing body of evidence demonstrates that they were much more intelligent than we once assumed. They developed technology, crafted tools, created art and held funerals for their dead.

Whether they actually spoke with each other, however, has remained a mystery. Their complex behaviors seem to suggest that they would have had to be able to communicate, but some scientists have contended that only modern humans have ever had the mental capacity for complex linguistic processes.

Whether that’s the case is going to be very difficult to prove one way or another, but the first step would be to determine if Neanderthals could produce and perceive sounds in the optimal range for speech-based communication.

Read more: Science Alert

Saving Languages From Extinction, With The Help Of AI

Let’s begin with a little quiz: Across the earth, there are 7 continents and 197 countries. How many languages are spoken?

The answer is around 7,000. If that number surprises you, it’s because our perspective is distorted: half of the planet’s 7.8 billion inhabitants express themselves or communicate in only about 20 of those languages (Arabic, English, Spanish, French, Hindi, Mandarin, Portuguese…), while roughly 97% of the 7,000 languages are together spoken by no more than 4% of the world’s population.

Our global linguistic heritage, as rich as it may be, is very fragile. The overwhelming majority of these 7,000 languages have no written tradition and today are spoken only by a handful of elderly people. This heritage is both the fruit and the guarantor of humanity’s cultural diversity, and is no less significant than the biodiversity of plant and animal species. The crisis it faces can be considered the sixth major extinction threatening the world.

“We estimate that 50% of the 7,000 languages will disappear by the end of this century, a rate to be compared with the 26% of mammal species or 14% of bird species threatened with extinction according to the International Union for Conservation of Nature,” says Evangelia Adamou, a linguist at the CNRS laboratory, LACITO (Languages and Civilizations with an Oral Tradition).

This threat of massive linguistic extinction is what motivated researchers to create the Pangloss collection in 1995, named after a character in Voltaire’s “Candide,” whose name in Greek means, “all languages.” Equipped with a website making it accessible to the general public, this collection is to linguistic diversity what protected areas are to biodiversity. Its sound library has been enriched over the years and now contains more than 3,600 audio or video recordings in 170 languages, nearly half of which are transcribed and annotated.

Read more: Worldcrunch

A New Way to Trace the History of Sci-Fi’s Made-Up Words

One thing nerds like to argue about is what nerds are allowed to argue about. If you agree to stipulate that science fiction is often one of those things—and, hey, we could argue about that—then a problem to solve is the boundaries of that genre, the what-it-is and what-it-isn’t. That’s not straightforward. Finding the edges of science fiction is like taking a walk around a hypercube in zero-gee; you keep bumping into walls and falling into other dimensions. Reasonable people don’t even agree on when it started—Frankenstein? The Time Machine? Gilgamesh? A story where a ghost kills people is horror; what if a robot did it? What if the universe has robots and spaceships but also magic and destiny?

It does seem all but inarguably true about science fiction, though, that the genre radiates neologisms (new words) and neosemes (new concepts made of old words) like an overloading warp core emits plasma and neutrinos. Just to be clear, that’s a lot.

Don’t get mad, romance and mystery fans; you are great. But the point is, if you’re doing it right, science fiction packs in new concepts, even entirely new languages—Klingon, for example, and that inkblot thing the heptapods squirted in Arrival. (What’s that you say? Fantasy has Elvish and Dothraki, why am I leaving those out? Let’s take that to the comments.) It’s where writers need words—or, if need is too strong, maybe want—for rockets propelled by impossible technology, people who are also machines, guns that shoot light instead of bullets, and all sorts of other things that don’t exist and therefore don’t (yet) have names. “Naming things well—and I’m not purporting to be someone who does that—but as a reader it’s so satisfying, because it can be exposition without being expository,” says Charles Yu, occasional WIRED contributor and author of How to Live Safely in a Science Fictional Universe and the National Book Award–winning Interior Chinatown. “And it’s so much fun too.”

That doesn’t mean it’s easy, of course. Those neologisms and neosemes exist within an individual story, but also in a larger conversation among every story, in a genre with fiercely loyal adherents. “The thing about making up new terminology—and this is a place that writers can fall down—is that, like anything else, it has to make sense not only within the universe that you’re building but also in the universe of the reader,” says John Scalzi, author of Old Man’s War and The Last Emperox, among other sci-fi works. “It has to be a term that is easily graspable, so they can put it into their lexicon and not have to think about it again, but at the same time it wants to be distinctive enough that when they see it they are reminded of you.”

Read more: Wired

Machine learning has been used to automatically translate long-lost languages

In 1886, the British archaeologist Arthur Evans came across an ancient stone bearing a curious set of inscriptions in an unknown language. The stone came from the Mediterranean island of Crete, and Evans immediately traveled there to hunt for more evidence. He quickly found numerous stones and tablets bearing similar scripts and dated them from around 1400 BCE.

That made the inscription one of the earliest forms of writing ever discovered. Evans argued that its linear form was clearly derived from rudely scratched line pictures belonging to the infancy of art, thereby establishing its importance in the history of linguistics.

He and others later determined that the stones and tablets were written in two different scripts. The oldest, called Linear A, dates from between 1800 and 1400 BCE, when the island was dominated by the Bronze Age Minoan civilization.

The other script, Linear B, is more recent, appearing only after 1400 BCE, when the island was conquered by Mycenaeans from the Greek mainland.

Evans and others tried for many years to decipher the ancient scripts, but the lost languages resisted all attempts. The problem remained unsolved until 1953, when an amateur linguist named Michael Ventris cracked the code for Linear B.

His solution was built on two decisive breakthroughs. First, Ventris conjectured that many of the repeated words in the Linear B vocabulary were names of places on the island of Crete. That turned out to be correct.

His second breakthrough was to assume that the writing recorded an early form of ancient Greek. That insight immediately allowed him to decipher the rest of the language. In the process, Ventris showed that ancient Greek first appeared in written form many centuries earlier than previously thought.

Ventris’s work was a huge achievement. But the more ancient script, Linear A, has remained one of the great outstanding problems in linguistics to this day.

It’s not hard to imagine that recent advances in machine translation might help. In just a few years, the study of linguistics has been revolutionized by the availability of huge annotated databases and techniques for getting machines to learn from them. As a result, machine translation from one language to another has become routine. And although these methods aren’t perfect, they have provided an entirely new way to think about language.
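
The excerpt does not describe how these models work, but one core intuition behind machine decipherment is easy to illustrate: line up the vocabulary of the unknown script against a candidate known language and look for systematically similar word forms, much as Ventris matched repeated Linear B words to Cretan place names. The sketch below is only a toy version of that intuition, not any published system’s method; the word lists are invented (loosely echoing Cretan place names), and it uses plain string similarity where real systems learn vector representations of words from large corpora.

```python
from difflib import SequenceMatcher

# Invented, transliterated word forms standing in for an undeciphered script,
# and candidate place names from a known language. Both lists are illustrative.
unknown_words = ["konoso", "aminiso", "paito"]
candidate_names = ["knossos", "amnisos", "phaistos", "athens"]

def similarity(a: str, b: str) -> float:
    """Crude stand-in for the learned similarity scores a real decipherment model produces."""
    return SequenceMatcher(None, a, b).ratio()

# Propose, for each unknown word, the most similar candidate name.
for word in unknown_words:
    best = max(candidate_names, key=lambda name: similarity(word, name))
    print(f"{word:10s} -> {best}  (score {similarity(word, best):.2f})")
```

Real decipherment systems replace the string-similarity function with scores learned from data and add constraints (for instance, requiring sound correspondences to be consistent across the whole vocabulary), but the underlying matching problem is the same.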

Read more: MIT Technology Review

Iceland is inventing a new vocabulary for a high-tech future

Every morning, Iceland’s language planners begin their day by taking off their shoes at the communal shoe rack in their office and slipping into pairs of soft clogs. As tourists begin to fill the alleyways of downtown Reykjavik with a faint babble of English, French, Chinese, and Italian, the language planners shuffle quietly back into their fight to save the Icelandic language from extinction.

The Language Planning Department, a small government-funded office of linguists with a rotating cast of subject experts, is in charge of integrating new and foreign concepts into the millennia-old Icelandic language. For decades, the department and its predecessors have invented new Icelandic words to keep up with civilizational advances abroad, from the invention of the computer (tölva) to the rise of political correctness (pólitísk rétthugsun).

Only about 320,000 people in the world speak Icelandic. Most are already bi- or trilingual, switching with ease into English or another language when abroad. The language planners’ mission is to ensure that the country’s citizens switch back to Icelandic at home. A vibrant national language, they say, is as vital to Iceland’s sovereignty as the roads that connect its moonscape plains and coastal towns.

Digital extinction

More people around the world are literate than ever before in our species’ history—and yet a mass extinction threatens the world’s languages. In a phenomenon known to linguists as “digital death,” 7,000 languages globally are at risk of falling into disuse because they simply are not that useful online. Netflix, for example, does not offer Icelandic subtitles—much less Wolof or Welsh. Wikipedia offers only about 47,000 articles in Icelandic, compared to more than 47 million in English.

Emerging speech recognition technology could narrow the language funnel even further. Voice assistants Cortana, Alexa, Siri and Google Assistant speak only 22 languages in total.

“Many have been concerned for the last 10 or 15 years that we are losing this battle in language technology,” says Johannes Sigtrygsson, a researcher and dictionary specialist at the Language Planning Department. “That English will become the language of these smart things, like Alexa and Google, things that you talk to. If you order them to do something, you will only be able to do it in English!” It’s the prospect of speaking a foreign language to devices in your own kitchen or bedroom that makes this linguistic conundrum seem particularly distasteful, his examples suggest. Imagine coming home and having to look up the Martian word for “half-and-half,” because that’s all your supposedly smart refrigerator understands.

And so the language planners, led by linguist Ari Páll Kristinsson, are working furiously to match every English word or concept with an Icelandic one—giving young Icelanders no excuse for depending on loanwords learned online.

A onetime colony of Denmark, the island nation has an organized resistance to linguistic imperialism that dates back centuries, and its methods are well-established. For Ari personally, the fight to make technology accessible in Icelandic dates back decades. In a 1998 interview with the Los Angeles Times, Ari criticized Microsoft for “destroying” Iceland’s language preservation efforts by refusing to translate Windows into Icelandic.

Read more: Quartz

Do You See What I See?

In a Candoshi village in the heart of Peru, anthropologist Alexandre Surrallés puts a small colored chip on a table and asks, “Ini tamaara?” (“How is it?” or “What is it like?”). What Surrallés would like to ask is, “What color is this?” But the Candoshi, a tribe of some 3,000 people living on the upper banks of the Amazon River, don’t have a word for the concept of color. Nor are their answers to the question he does ask familiar to most Westerners. In this instance, a lively discussion erupts between two Candoshi about whether the chip, which Surrallés would call amber or yellow-orange, looks more like ginger or fish spawn.

This moment in July 2014 was just one among many similar experiences Surrallés had during a total of three years living among the Candoshi since 1991. His fieldwork led Surrallés to the startling conclusion that these people simply don’t have color words: reliable descriptors for the basic colors in the world around them. Candoshi children don’t learn the colors of the rainbow because their community doesn’t have words for them.

Though his finding might sound remarkable, Surrallés, who is with the National Center for Scientific Research in Paris, isn’t the first to propose that this cultural phenomenon exists. Anthropologists in various corners of the world have reported on other small tribes who also don’t seem to have a staple vocabulary for color. Yet these conclusions fly in the face of those found in the most influential book on the topic: The World Color Survey, published in 2009, which has at its very heart the hypothesis that every culture has basic color words for at least part of the rainbow.

The debate sits at the center of an ongoing war in the world of color research. On the one side stand “universalists,” including the authors of The World Color Survey and their colleagues, who believe in a conformity of human perceptual experience: that all people see and name colors in a somewhat consistent way. On the other side are “relativists,” who believe in a spectrum of experience and who are often offended by the very notion that a Westerner’s sense of color might be imposed on the interpretation of other cultures and languages. Many researchers, like Surrallés, say they stand in the middle: While there are some universals in human perception, Surrallés argues, color terms don’t seem to be among them.

It is almost incomprehensible at first to imagine that the rainbow is not viewed similarly by all people, that there might be more, or fewer, colors in the world than we thought, or that someone might not bother to give colors a name. And yet once one gets beyond the initial, startling blow of these ideas, they begin to seem obvious. There are, after all, no actual lines in a real rainbow. There’s no reason to think that orange is any more or less a legitimate color than, say, cyan, or that one culture’s list of colors is more “real” than another’s.

Or is there?

Read more: Sapiens

A change in our diets may have changed the way we speak

As the saying goes, we are what we eat—but does that aspect of our identity carry over to the languages we speak?

In a new study in Science, a team of linguists at the University of Zurich uses biomechanics and linguistic evidence to make the case that the rise of agriculture thousands of years ago increased the odds that populations would start to use sounds such as f and v. The idea is that agriculture introduced a range of softer foods into human diets, which altered how humans’ teeth and jaws wore down with age in ways that made these sounds slightly easier to produce.

“I hope our study will trigger a wider discussion on the fact that at least some aspects of language and speech—and I insist, some—need to be treated as we treat other complex human behaviors: laying between biology and culture,” says lead study author Damián Blasi.

If confirmed, the study would be among the first to show that a culturally induced change in human biology altered the arc of global languages. Blasi and his colleagues stress that changes in tooth wear didn’t guarantee changes in language, nor did they replace any other forces. Instead, they argue that the shift in tooth wear improved the odds of sounds such as f and v emerging. Some scientists in other fields, such as experts in tooth wear, are open to the idea. (Today, many scientists are racing to save languages that are dying out.)

“[Tooth wear] is a common pattern with deep evolutionary roots; it’s not specific for humans [and] hominins but also present in the great apes,” University of Zurich paleoanthropologists Marcia Ponce de León and Christoph Zollikofer, who didn’t participate in the study, say in a joint email. “Who could have imagined that, after millions of years of evolution, it will have implications for human language diversity?” (Another study shows how ancient cave art may be linked to language.)

While the study relies on various assumptions, “I think the authors build a very plausible case,” adds Tecumseh Fitch, an expert on bioacoustics at the University of Vienna who wasn’t involved with the work. “This is probably the most convincing study yet showing how biological constraints on language change could themselves change over time due to cultural changes.”

But many linguists have defaulted to skepticism, out of a broader concern about tracing differences in languages back to differences in biology—a line of thinking within the field that has led to ethnocentrism or worse. Based on the world’s huge variety of tongues and dialects, most linguists now think that we all broadly share the same biological tools and sound-making abilities for spoken languages.

“We really need to know that the small [average] differences observed in studies like this aren’t swamped by the ordinary diversity within a community,” Adam Albright, a linguist at MIT who wasn’t involved with the study, says in an email.

Read more: National Geographic

There’s a Weird Similarity Between Chimp Communication And Human Language

Behind this sentence lies a solid bedrock of mathematics, one that has been shown to govern all human languages.

Linguists have found the hoots and hollers, gestures and expressions used by chimpanzees obey some of the same basic principles, demonstrating the foundations of language have deep evolutionary roots.

A study led by the University of Roehampton in the UK analysed hundreds of video recordings of chimpanzees living in Uganda’s Budongo Forest reserve, categorising and measuring the characteristics of 58 types of playful gesture.

They were looking for hints of two rules common to all forms of human communication – Zipf’s law of abbreviation, and Menzerath’s law on the complexity of linguistic constructs.

Research had already been carried out on chimpanzee hooting and panting, showing these rules at work. But in closer quarters chimps communicate with more visual signs, leaving researchers a whole other linguistic system to analyse.

Zipf’s law describes an inverse relationship between how often we use a word and its ranking with respect to other words. For example, the second most repeated word in any language will be used half as often as the first.

This quirk was figured out by a linguist named George Kingsley Zipf, who also noticed that the higher a word sits on this list, the shorter it tends to be.

Take the top five words in the English language as an example – the, be, and, of, and a. They’re pretty snappy compared to the words ranked at 500 – value, international, building, and action.

This doesn’t only apply to every other language spoken by humans; it’s been shown to be at work in the vocalisations of macaques and dolphins, suggesting efficiency lies at the core of many forms of animal communication.
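
Both patterns are easy to check for yourself on any large plain-text file. The short Python sketch below (the file name is a placeholder and the rank cutoff is arbitrary) ranks words by frequency, compares the counts of the two most common words, and contrasts the average length of the most common words with the rest:

```python
import re
from collections import Counter

# Placeholder path: any large plain-text file will do.
with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

ranked = Counter(words).most_common()

# Zipf's law: frequency falls off roughly as 1/rank, so the second-ranked word
# should appear about half as often as the first.
(first, f1), (second, f2) = ranked[0], ranked[1]
print(f"{first!r}: {f1}, {second!r}: {f2}, ratio {f1 / f2:.2f} (Zipf predicts about 2)")

# Zipf's law of abbreviation: the most frequent words tend to be the shortest.
def avg_len(ws):
    return sum(len(w) for w in ws) / len(ws)

common = [w for w, _ in ranked[:100]]  # arbitrary cutoff for "common"
rare = [w for w, _ in ranked[100:]]
print(f"average length of the 100 most common words: {avg_len(common):.1f}")
print(f"average length of the remaining words:       {avg_len(rare):.1f}")
```

On most sizeable English texts the ratio typically comes out somewhere near 2 and the common words are noticeably shorter; that abbreviation pattern is the one the chimpanzee study looked for in gestures, alongside Menzerath’s law.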

Read more: Science Alert