Ancestors May Have Created ‘Iconic’ Sounds As Bridge To First Languages

The ‘missing link’ that helped our ancestors to begin communicating with each other through language may have been iconic sounds, rather than charades-like gestures — giving rise to the unique human power to coin new words describing the world around us, a new study reveals.

It was widely believed that, in order to get the first languages off the ground, our ancestors first needed a way to create novel signals that could be understood by others, relying on visual signs whose form directly resembled the intended meaning.

However, an international research team, led by experts from the University of Birmingham and the Leibniz-Centre General Linguistics (ZAS), Berlin, has discovered that iconic vocalizations can convey a much wider range of meanings more accurately than previously supposed.

The researchers tested whether people from different linguistic backgrounds could understand novel vocalizations for 30 meanings that are common across languages and that might have been relevant in early language evolution.

These meanings spanned animate entities, including humans and animals (child, man, woman, tiger, snake, deer), inanimate entities (knife, fire, rock, water, meat, fruit), actions (gather, cook, hide, cut, hunt, eat, sleep), properties (dull, sharp, big, small, good, bad), quantifiers (one, many) and demonstratives (this, that).

The team published their findings in Scientific Reports, highlighting that the vocalizations produced by English speakers could be understood by listeners from a diverse range of cultural and linguistic backgrounds. Participants included speakers of 28 languages from 12 language families, including groups from oral cultures such as speakers of Palikur living in the Amazon forest and speakers of Daakie in the South Pacific island nation of Vanuatu. Listeners from each language were more accurate than chance at guessing the intended referent of the vocalizations for each of the meanings tested.
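The write-up does not spell out the statistics behind "more accurate than chance", but the basic logic is easy to illustrate: in a forced-choice guessing task with k candidate meanings, chance accuracy is 1/k, and the question is whether listeners' hit rate reliably exceeds it. Below is a minimal sketch of that comparison using a simple binomial test; all numbers are invented for illustration and are not the study's data.

```python
# Minimal sketch (not the study's actual analysis pipeline): testing whether
# listeners guess an intended meaning better than chance in a forced-choice task.
from scipy.stats import binomtest

n_trials = 120          # hypothetical: total guesses collected for one meaning
n_correct = 58          # hypothetical: correct guesses
n_options = 6           # hypothetical: candidate meanings offered per trial
chance_level = 1 / n_options

result = binomtest(n_correct, n_trials, p=chance_level, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, chance = {chance_level:.2f}, "
      f"p = {result.pvalue:.2g}")
```

The study's own analysis is surely more involved than this; the sketch only illustrates what "better than chance" means for a single meaning.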

Read more: The Archaeology News Network

My belly is angry, my throat is in love: Body parts and emotions in Indigenous languages

Many languages in the world allude to body parts to describe emotions and feelings, as in “broken-hearted,” for instance. While some have just a few expressions like this, Australian Indigenous languages tend to use a lot of them, covering many parts of the body: from “flowing belly” for “feel good” to “burning throat” for “be angry” to “staggering liver” meaning “to mourn.”

As a linguist, I first learnt this when I worked with speakers of the Dalabon, Rembarrnga, Kune, Kunwinjku and Kriol languages in the Top End, as they taught me their own words to describe emotions.

Recently, with the help of my collaborator Kitty-Jean Laginha, I have looked systematically for such expressions in dictionaries and word lists from 67 Indigenous languages across Australia. We found at least 30 distinct body parts involved in about 800 emotional expressions.

Where do these body-emotion associations come from? Are they specific to Australian languages, or do they occur elsewhere in the world as well? There are no straightforward answers to these questions. Some expressions seem to be specific to the Australian continent, others are more widespread. As for the origins of the body-emotion association, our study suggests several possible explanations.

Firstly, some body parts are involved in emotional behaviors. For instance, we turn our back on people when we are upset with them. In some Australian Indigenous languages, “turn back” can mean “hold a grudge” as a result.

Secondly, some body parts are involved in our physiological responses to emotion. For instance, fear can make our heart beat faster. Indeed, in some languages “heart beats fast” can mean “be afraid.”

Thirdly, some body parts represent the mind. This can be a bridge to emotions linked to intellectual states, like confusion or hesitation. For instance, “have a sore ear” can mean “be confused.”

Read more: phys.org

Pace of prehistoric human innovation could be revealed by ‘linguistic thermometer’

Multi-disciplinary researchers at The University of Manchester have helped develop a powerful physics-based tool to map the pace of language development and human innovation over thousands of years – even stretching into prehistory, before records were kept.

Tobias Galla, a professor in theoretical physics, and Dr Ricardo Bermúdez-Otero, a specialist in historical linguistics, both at The University of Manchester, have come together as part of an international team to develop the new model, presented in a paper entitled ‘Geospatial distributions reflect temperatures of linguistic features’, authored by Henri Kauhanen, Deepthi Gopal, Tobias Galla and Ricardo Bermúdez-Otero and published in the journal Science Advances.

Professor Galla has applied statistical physics – usually used to map atoms or nanoparticles – to help build a mathematically-based model that responds to the evolutionary dynamics of language. Essentially, the forces that drive language change can operate across thousands of years and leave a measurable “geospatial signature”, determining how languages of different types are distributed over the surface of the Earth.

Read more: EurekAlert!

Can language slow down time?

What if the language you spoke caused you to perceive time differently?

Does that sound like magic realism? Close: it’s economics. Some recent research papers published in economics journals – notably a 2013 paper by Keith Chen of Yale and a 2018 paper by three Australian economists – have proposed that languages that grammatically distinguish future from present cause their speakers to plan less, save less, even care less for the environment.

That sound you just heard was thousands of linguists rolling their eyes and groaning “Whorf”.

Benjamin Lee Whorf was an inspector for a fire insurance company, and he saw that language could cause safety problems. People were careless around empty gasoline drums because they were “empty” – except that, in fact, they were filled with gasoline vapour, which can explode. This spurred him to study and write about language.

Whorf spent time with the Hopi people of northeastern Arizona. He observed that they had no grammatical distinctions for future and past and no way to count periods of time. He looked at their cultural practices and concluded that the Hopi see time quite differently from us, and that concepts that seem obvious to us – such as “tomorrow is another day” – had no meaning for them.

His publication of these ideas in 1939 set the philosophy of language on fire. From Whorf’s proposals and those of his teacher, a Yale professor named Edward Sapir, came what Whorf called the Linguistic Relativity Hypothesis, commonly known as the Sapir-Whorf hypothesis. Its mildest form is that language can affect how we think; its strongest form is that we can’t think about things our language doesn’t let us talk about.

Over time, these explosive ideas – and much of Whorf’s data – were found to be mostly… empty. In 1983, a researcher named Ekkehart Malotki published Hopi Time, a thick volume detailing his research on the Hopi and their language, which proceeded with a long, slow burn to incinerate Whorf’s edifice of data and theory about the Hopi. And with the demise of the strong version of the Sapir-Whorf hypothesis came a mistrust of any ideas of linguistic relativity.

Read more: BBC

When Genetics and Linguistics Challenge the Winners’ Version of History

Two conquering empires and more than 500 years of colonial rule failed to erase the cultural and genetic traces of indigenous Peruvians, a new study finds. This runs contrary to historical accounts that depict a complete devastation of northern Peru’s ancient Chachapoya people by the Inca Empire.

The Chachapoyas—sometimes referred to as “Warriors of the Clouds” because they made their home in the Amazonian cloud forests—are mainly known today for what they built: fortified hilltop settlements and intricate sarcophagi overlooking their villages from sheer, inaccessible cliff sides. The little we know about their existence before the arrival of the Spanish comes to us via an oral history passed along by the Inca to their Spanish conquerors—in other words, the winners’ version of history.

Now, a study tracking the genetic and linguistic history of modern Peruvians is revealing that the Chachapoyas may have fared better than these mainstream historical accounts would have us believe. As Chiara Barbieri, a post-doctoral researcher from the Max Planck Institute for the Science of Human History, puts it: “Some of these historical documents were exaggerated and a little bit biased in favor of the Inca.”

Many of these early reports stem from two historians who essentially wrote the book on the Inca Empire, which lasted from 1438 to 1533: Inca Garcilaso de la Vega, the son of a conquistador and an Inca princess, who published chronicles on the Inca Empire in the early 17th century, and Pedro de Cieza de Leon, a Spanish conquistador from a family of Jewish converts who travelled through the area in the mid-16th century and wrote one of the first lengthy histories of the Inca people and Spanish conquests.

According to Cieza de Leon’s account, it was in the 1470s, about midway through the Inca Empire’s existence, that paramount leader Túpac Inca Yupanqui first attacked the Chachapoyas in what is today northern Peru. He quickly found that the Warriors of the Clouds were not the type to give up without a fight. Cieza de Leon described the first battle between Yupanqui and the Chachapoyas in the first part of his Chronicle of Peru:

The Chachapoyas Indians were conquered by them, although they first, in order to defend their liberty, and to live in ease and tranquillity, fought with such fury that the Yncas fled before them. But the power of the Yncas was so great that the Chachapoyas Indians were finally forced to become servants to those Kings, who desired to extend their sway over all people.

Beaten but not defeated, the Chachapoyas rebelled again after Yupanqui died, during the reign of his son Huayna Capac. The new ruler had to re-conquer the region, but encountered many of the same difficulties his father had, according to Cieza de Leon:

Among the Chachapoyas the Inca met with great resistance; insomuch that he was twice defeated by the defenders of their country and put to flight. Receiving some succour, the Inca again attacked the Chachapoyas, and routed them so completely that they sued for peace, desisting, on their parts, from all acts of war. The Inca granted peace on conditions very favourable to himself, and many of the natives were ordered to go and live in Cuzco, where their descendants still reside.

De la Vega’s account, written nearly 50 years after Cieza de Leon’s in the early 17th century, tells a similar story of a decisive conquest and subsequent forced dispersal of the Chachapoyas around the Inca Empire. The Inca often used this strategy of forced dispersal, which they referred to by the Quechua word mitma, to dissuade future rebellion in the vast region they came to control. (Quechua, according to the new study, is the most widely spoken language family of the indigenous Americas.)

“We have some records in the Spanish history that the Inca had replaced the population completely, moving the Chachapoyas for hundreds of kilometers and replacing them with people from other parts of the empire,” Barbieri says.

These and other accounts are some of the only historical notes we have of the Inca, who lacked any system of writing other than the quipu, or knot records. The quipu system of cords indicated numbers with different types of knots and was used for accounting and other records.

“We know a lot about what the Inca did because Inca kings, or high officials, were talking to Spanish historians,” Barbieri says. “So the piece of history of this region that we know is very much biased towards what the Inca elite were telling the Spaniards. What we don’t know was what happened before that—everything that happened before the 16th century.”

That is now changing, thanks to a genetic study on which Barbieri was lead author, published recently in Scientific Reports.

Read more: Smithsonian

Carrying On His Great Grandfather’s Work, A Kansas Professor Helps Keep Their Language Alive

As a kid, Andrew McKenzie had an unusual affinity for languages.

He took French in high school (because everyone else was taking Spanish). But that wasn’t enough.

“I started to teach myself different languages, like Latin and Greek and Basque and Turkish,” he remembers. “I would drive into the city to a bookstore, and they’d have a section with language books. I’d say, ‘I’m just going to learn this language because the book has the prettiest font.'”

So it’s not surprising that McKenzie ended up as a professor of linguistics at the University of Kansas. But it turns out there’s another reason why he’s uniquely qualified for his area of research, which involves documenting the endangered language of Oklahoma’s Kiowa people.

A language dies when children stop learning it naturally (as opposed to being taught it at school) and when there’s no documentation. But if it has been documented, a language can be revived (the best example of this is Hebrew).

The Kiowa tribe is small, with only about 12,000 members, many of them spread out around the country. Most of the native speakers are in Southwest Oklahoma.

“There are only a few dozen speakers, and some people would even estimate fewer,” McKenzie says. “And a lot of them are in their 80s and 90s.”

By one estimate, Kiowa is among 165 endangered languages in the United States; thousands of languages around the world are also in danger of extinction.

Read more: KCUR 89.3

Evolutionary biology can help us understand how language works

As a linguist I dread the question, “what do you do?”, because when I answer “I’m a linguist” the inevitable follow-up question is: “How many languages do you speak?” That, of course, is not the point. While learning languages is a wonderful thing to do, academic linguistics is the scientific study of language.

What I do in my work is to try to understand how and why languages are the way they are. Why are there so many in some places and so few in others? How did languages develop so many different ways of fulfilling the same kinds of communicative tasks? What is uniquely human about language, and how do the human mind and language shape each other? This is something of a new direction in linguistics. The old-school study of language history was more concerned with language for its own sake: understanding the structure of languages and reconstructing their genealogical relationships.

One of the exciting things happening in linguistics today is that linguists are increasingly connecting with the field of evolutionary biology. Evolutionary biologists ask very similar questions about species to those my colleagues and I want to ask about languages: why they are distributed in a certain way, for example, or what explains the differences and similarities between them.

These similarities in outlook allow us to apply all the modern tools of computational evolutionary biology to linguistic questions, giving us new insights into fundamental questions about the processes of language change, and through that into the nature of language in general.
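To give a concrete flavour of what borrowing those tools can look like, here is a minimal sketch that treats invented cognate presence/absence data the way a biologist treats a character matrix and clusters languages by shared vocabulary. The data, language names and method choices are illustrative assumptions, not the approach of any particular study.

```python
# Toy illustration: languages as taxa, shared cognate classes as shared characters.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical presence/absence of five cognate classes in four languages.
languages = ["Lang A", "Lang B", "Lang C", "Lang D"]
cognates = np.array([
    [1, 1, 0, 1, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 0, 1],
])

distances = pdist(cognates, metric="hamming")       # fraction of differing traits
tree = linkage(distances, method="average")         # UPGMA-style clustering
groups = fcluster(tree, t=2, criterion="maxclust")  # cut the tree into two groups
for lang, grp in zip(languages, groups):
    print(lang, "-> group", grp)
```

Real phylogenetic work on languages typically uses explicit models of cognate gain and loss and Bayesian tree inference rather than simple distance clustering, but the parallel with biology is the same: languages play the role of species, and shared vocabulary plays the role of shared characters.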

Read more: The Conversation

When Did Colonial America Gain Linguistic Independence?

When did Americans start sounding funny to English ears? By the time the Declaration of Independence was signed in 1776, carefully composed in the richly-worded language of the day, did colonial Americans—who after all were British before they decided to switch to become American—really sound all that different from their counterparts in the mother country?

If you believe historical reenactments in film and television, no. Many people assume colonists spoke with the same accents their families immigrated with, which were largely British ones. Of course, sociolinguistic studies regularly show that speakers of American English seem to have a gentle inferiority complex about their own different accents, often rating British accents as higher in social status, for instance. So anglophone language attitudes being what they are, the accents of historical figures often end up British-inflected anyway, which, for audiences on both sides of the pond, seems to add an air of artistic verisimilitude to what might otherwise be a bald and unconvincing narrative. This might ultimately be a stretch for Romans and Nazis and evil villains. But is it really out of left field for the principal historical figures of colonial British America, on screen or off, to have sounded more or less British, with Britain’s tumbling mess of quirky regional dialects, a Scot here, a Cockney there, as well as the ever-present Queen’s English?

Read more: JSTOR

After Years Of Restraint, A Linguist Says ‘Yes!’ To The Exclamation Point

The only literary work about punctuation I’m aware of is an odd early story by Anton Chekhov called “The Exclamation Mark.” After getting into an argument with a colleague about punctuation, a school inspector named Yefim Perekladin asks his wife what an exclamation point is for. She tells him it signifies delight, indignation, joy and rage. He realizes that in 40 years of writing official reports, he has never had the need to express any of those emotions.

As Perekladin obsesses about the mark, it becomes an apparition that haunts his waking life, mocking him as an unfeeling machine. In desperation, he signs his name in a visitors book and puts three exclamation points after it. All of a sudden, Chekhov writes, “He felt delight and indignation, he was joyful and seethed with rage.”

Yefim Perekladin, c’est moi! At least, I used to be one of those people who use the exclamation point as sparingly as possible. We’ll grudgingly stick one in after an interjection or a sentence like “What a jerk!” but never to punch up an ordinary sentence in an essay or email. We say we’re saving them for special occasions, but they never seem to arise.

Read more: NPR

Elon Musk and linguists say that AI is forcing us to confront the limits of human language

In analytic philosophy, it is often held that any meaning can be expressed in language. In his book Expression and Meaning (1979), UC Berkeley philosopher John Searle calls this idea “the principle of expressibility, the principle that whatever can be meant can be said”. Moreover, in the Tractatus Logico-Philosophicus (1921), Ludwig Wittgenstein suggests that “the limits of my language mean the limits of my world”.

Outside the hermetically sealed field of analytic philosophy, the limits of natural language when it comes to meaning-making have long been recognized in both the arts and sciences. Psychology and linguistics acknowledge that language is not a perfect medium. It is generally accepted that much of our thought is non-verbal, and at least some of it might be inexpressible in language. Notably, language often cannot express the concrete experiences engendered by contemporary art and fails to formulate the kind of abstract thought characteristic of much modern science. Language is not a flawless vehicle for conveying thought and feelings.

In the field of artificial intelligence, technology can be incomprehensible even to experts. In the essay “Is Artificial Intelligence Permanently Inscrutable?” Princeton neuroscientist Aaron Bornstein discusses this problem with regard to artificial neural networks (computational models): “Nobody knows quite how they work. And that means no one can predict when they might fail.” This could harm people if, for example, doctors relied on this technology to assess whether patients might develop complications.

Bornstein says organizations sometimes choose less efficient but more transparent tools for data analysis and “even governments are starting to show concern about the increasing influence of inscrutable neural-network oracles.” He suggests that “the requirement for interpretability can be seen as another set of constraints, preventing a model from a ‘pure’ solution that pays attention only to the input and output data it is given, and potentially reducing accuracy.” The mind is a limitation for artificial intelligence: “Interpretability could keep such models from reaching their full potential.” Since the work of such technology cannot be fully understood, it is virtually impossible to explain in language.

Read more: Quartz

Language alters our experience of time

It turns out, Hollywood got it half right. In the film Arrival, Amy Adams plays linguist Louise Banks who is trying to decipher an alien language. She discovers the way the aliens talk about time gives them the power to see into the future – so as Banks learns their language, she also begins to see through time. As one character in the movie says: “Learning a foreign language rewires your brain.”

My new study – which I worked on with linguist Emanuel Bylund – shows that bilinguals do indeed think about time differently, depending on the language context in which they are estimating the duration of events. But unlike in Hollywood, bilinguals sadly can’t see into the future. However, this study does show that learning a new way to talk about time really does rewire the brain. Our findings are the first psychophysical evidence of cognitive flexibility in bilinguals.

We have known for some time that bilinguals go back and forth between their languages rapidly and often unconsciously – a phenomenon called code-switching. But different languages also embody different worldviews and different ways of organising the world around us. The way that bilinguals handle these different ways of thinking has long been a mystery to language researchers.

Read more: The Conversation

Understanding Languages with Physics and Math

A husband and wife scientist duo from Poland has developed a computer model that simulates how vocabulary exchanges occur between settlers and nomads. According to their results, published in the journal Physical Review E, the nomadic groups are more likely to adopt words from settlers than the other way around.

The new model provides a tool that can help sociolinguists understand how migration and intercultural interactions can influence the evolution of a language. While linguists tend to express careful skepticism about the significance of this and similar predictive computational models, these studies contribute to the growing interest in studying social behaviors with computational methods.

“For this study, we developed a model using a method that has been around for 20 years or so,” said Adam Lipowski, a physicist from Adam Mickiewicz University in Poznań, Poland. He is referring to the Naming Game, which simulates the exchange of information between individuals during face-to-face interactions.

The model divides individuals into two groups, each with its own language. During the simulation, the group that represents the settlers stays put, while individuals from the nomadic group move around. Over time, the model shows that the nomads are more likely to pick up new words than the settlers, even when everything else, such as the vocabulary size or the population size of the groups, is kept equal. This result came as a bit of a surprise to Lipowski and his co-author, Dorota Lipowska.
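The article doesn't give the model's exact update rules, so the sketch below is only a toy in the same spirit: a bare-bones Naming Game in which settlers keep fixed sites, nomads relocate every step, and one randomly chosen speaker-hearer pair at a shared site interacts per step (a successful interaction collapses both vocabularies to the agreed word; a failed one teaches the hearer the new word). All parameters and word labels are invented for illustration and are not taken from the published model.

```python
# Toy naming-game sketch in the spirit of the settlers-vs-nomads model described
# above; the update rule and parameters are illustrative assumptions.
import random

random.seed(1)
N_SITES, N_SETTLERS, N_NOMADS, STEPS = 20, 60, 20, 5000

# Each agent has a site (fixed for settlers, re-drawn for nomads) and a vocabulary.
settlers = [{"site": i % N_SITES, "vocab": {"settler_word"}} for i in range(N_SETTLERS)]
nomads = [{"site": random.randrange(N_SITES), "vocab": {"nomad_word"}} for _ in range(N_NOMADS)]

for _ in range(STEPS):
    # Nomads relocate every step; settlers never move.
    for agent in nomads:
        agent["site"] = random.randrange(N_SITES)

    # One interaction: a random speaker and a hearer sharing its site.
    speaker = random.choice(settlers + nomads)
    neighbours = [a for a in settlers + nomads
                  if a is not speaker and a["site"] == speaker["site"]]
    if not neighbours:
        continue
    hearer = random.choice(neighbours)
    word = random.choice(sorted(speaker["vocab"]))
    if word in hearer["vocab"]:
        # Success: both agents settle on the agreed word.
        speaker["vocab"], hearer["vocab"] = {word}, {word}
    else:
        # Failure: the hearer learns the new word.
        hearer["vocab"].add(word)

settler_adopters = sum("nomad_word" in a["vocab"] for a in settlers)
nomad_adopters = sum("settler_word" in a["vocab"] for a in nomads)
print(f"settlers knowing the nomads' word: {settler_adopters}/{N_SETTLERS}")
print(f"nomads knowing the settlers' word: {nomad_adopters}/{N_NOMADS}")
```

Because each nomad keeps encountering different settlers while any given settler only occasionally meets a nomad, runs of this toy tend to show the nomads picking up the settlers' word more readily, which is the same qualitative asymmetry the authors report for their model.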

Read more: Inside Science