It’s All Greek to You and Me, So What Is It to the Greeks?

It’s a curious thing when there is an idiom—structured roughly the same way and meaning essentially the same thing—that exists in a large number of languages. It’s even more curious when that idiom, having emerged in dozens of different languages, is actually … about language. That’s the case with “It’s Greek to me.”

In a wide range of languages, major and minor, from many different branches of the language family tree, there is some version of “It’s Greek to me.” These idioms all describe one person’s failure to understand what another is trying to say, but in a particular, dismissive way. It’s not just, “Sorry, I can’t understand you.” It’s saying, “The way you’re speaking right now is incomprehensible.” And it specifically compares that incomprehensibility to a particular language, a language agreed upon in that culture to be particularly impenetrable.

Sometimes that original cultural peg has been lost. In English, the phrase doesn’t really indicate anything about the way modern English-speakers feel about the Greek language or Greece in general. It’s just an old, tired idiom. In fact, polls show that native English speakers don’t even think about Greek when asked to name the hardest language to learn. So where did the phrase come from, and why is its sentiment so universal?

As with far too many linguistic questions like this, there is no definitive answer. One theory ties it to medieval monks. In Western Europe at this time, the predominant written language was Latin, but much of the writing that survived from antiquity was in Greek. The theory holds that these monks, in transcribing and copying their texts, were not necessarily able to read Greek, and would write a phrase next to any Greek text they found: “Graecum est; non legitur.” Translated: “It is Greek; it cannot be read.”

This phrase seems to have been embedded in parts of Western Europe, and examples appear in plays starting in the 16th century. William Shakespeare, in his 1599 Julius Caesar, used it, and he is widely credited with bringing a long-latent phrase into the mainstream. Interestingly, Shakespeare’s version is a lot more literal than most of the uses of this idiom. In Julius Caesar, the Roman character Casca describes a speech made by Cicero, a scholar of Greek. Casca, one of the conspirators who assassinates Caesar, does not speak Greek. So he says, “Those that understood him smiled at one another and shook their heads; but, for mine own part, it was Greek to me.”

Read more: Pocket

How do machines translate linguistically distant languages?

“Poisonous and evil rubbish”. “Pregnant woman over 70 lounge”. “Slip and fall carefully”. Such semantically and syntactically erroneous sentences were not taken from a practice sheet in an English language classroom, but extracted from machine translation (MT) software and published on street signs in East Asia.

Incorrect automated translations like these trigger raised eyebrows or giggles of ridicule so often that meme pages have been established with the sole purpose of mocking humorous translation failures around the world (a culturally ignorant practice, but that’s an argument for another day). Beyond the superficial laughter, however, we should still concern ourselves with the issue of machine translation. In an increasingly globalised era, machine translation is destined to play a pivotal role in cross-cultural communication for generations to come. Today, while machines generally produce satisfactory translations for typologically related language pairs (e.g. Norwegian to Swedish), problems often emerge with linguistically distant language pairs (e.g. English to Japanese). So how are scientists working to improve the latter type?

To answer that, it is first necessary to understand how machine translation functions, as well as its evolutionary path. Initially coined as computer-assisted language processing, machine translation has taken on multiple forms over the decades, each adopting a different approach to processing input and producing output. For an English sentence as simple as “The women speak with the principal”, a traditional rule-based machine translation system starts with an analysis of morphosyntax (i.e. word and sentence structure). It first identifies the subject and predicate (“the women” vs. “speak with the principal”) and other key grammatical information (the particularity of “the” and the plurality of “women”). Afterwards, it processes the semantics of the input by interpreting what each individual word means in context (is “principal” here a noun, as in “school headmaster”? Or an adjective meaning “primary”?) and finally translates the interpreted input into the target language.
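
To make those stages concrete, here is a minimal sketch of a rule-based pipeline for that one sentence. Everything in it is an assumption made for illustration: the tiny lexicon, the toy grammar rules, and the choice of Spanish as the target language are invented and do not come from any real MT system.

```python
# Toy rule-based MT pipeline for "the women speak with the principal".
# Lexicon, rules, and the Spanish target are illustrative assumptions only.
LEXICON = {
    "the":       {"pos": "DET",  "es": "las"},                         # plural form; singular handled below
    "women":     {"pos": "NOUN", "es": "mujeres", "number": "plural"},
    "speak":     {"pos": "VERB", "es": "hablan"},                      # third-person plural
    "with":      {"pos": "PREP", "es": "con"},
    "principal": {"pos": "NOUN", "es": "director"},                    # noun sense, see disambiguate()
}

def analyse(tokens):
    """Morphosyntactic step: tag each token and split subject from predicate."""
    tagged = [(t, LEXICON[t]["pos"]) for t in tokens]
    # Toy rule: the subject is the initial DET + NOUN pair; the rest is the predicate.
    return tagged[:2], tagged[2:]

def disambiguate(tokens):
    """Semantic step: 'principal' directly after a determiner is read as the noun
    ('school headmaster'), not the adjective meaning 'primary'."""
    return {
        t: ("noun" if i > 0 and tokens[i - 1] == "the" else "adjective")
        for i, t in enumerate(tokens) if t == "principal"
    }

def transfer(tokens):
    """Transfer step: replace each analysed token with a target-language equivalent."""
    out = []
    for i, t in enumerate(tokens):
        word = LEXICON[t]["es"]
        # Crude agreement rule: 'the' before a non-plural noun becomes 'el'.
        if t == "the" and i + 1 < len(tokens) and LEXICON[tokens[i + 1]].get("number") != "plural":
            word = "el"
        out.append(word)
    return " ".join(out)

tokens = "the women speak with the principal".split()
print(analyse(tokens))        # subject vs. predicate
print(disambiguate(tokens))   # {'principal': 'noun'}
print(transfer(tokens))       # las mujeres hablan con el director
```

Real rule-based systems followed the same outline, but with hand-built lexicons and grammars that were vastly larger, and they still struggled with exactly the kind of ambiguity this toy disambiguation step papers over.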

Read more: Varsity

To Communicate With Apes, We Must Do It On Their Terms

On August 24, 1661, Samuel Pepys, an administrator in England’s navy and famous diarist, took a break from work to go see a “strange creature” that had just arrived on a ship from West Africa. Most likely, it was a chimpanzee—the first Pepys had ever seen. As he wrote in his diary, the “great baboon” was so human-like that he wondered if it were not the offspring of a man and a “she-baboon.”

“I do believe that it already understands much English,” he continued, “and I am of the mind it might be taught to speak or make signs.”

Humans and apes share nearly 99% of their DNA, but language is one thing that seems to irreconcilably differentiate our species. Is that by necessity of nature, though, or simply a question of nurture?

“It could be that there’s something biologically different in our genome, something that’s changed since we split from apes, and that’s language,” says Catherine Hobaiter, a primatologist at the University of St. Andrews in Scotland. “But another possibility is that they might have the cognitive capacity for language, but they’re not able to physically express it like we do.”

In the centuries since Pepys’ speculations, scientists and the public alike have only become more enamored with the idea of breaking through communication barriers separating our species. “It’s every scientist’s dream,” Hobaiter says. “Instead of having to do years of tests, we could just sit down and have a chat.”

This not only would allow us to find out what our closest relatives are thinking, but also to potentially learn something about our own evolution and what it means to be human, says Jared Taglialatela, an associate professor at Kennesaw State University and the director of research at the Ape Cognition and Conservation Initiative. “We shared a common ancestor just 5.5 million years ago with both chimpanzees and bonobos,” he says. “That’s useful for making comparisons and answering questions about human origins.”

Scientists have been trying to teach chimps to speak for decades, with efforts ranging from misguided to tantalizingly promising. Now, however, they are coming to realize that we’ve likely been going about it in the wrong way. Rather than force apes to learn our language, we should be learning theirs.

Read more: NOVA Next

The Brain Has Its Own “Autofill” Function for Speech

The world is an unpredictable place. But the brain has evolved a way to cope with the everyday uncertainties it encounters—it does not present us with many of them, but instead resolves them into a realistic model of the world. The body’s central controller predicts every contingency, using its stored database of past experiences, to minimize the element of surprise. Take vision, for example: we rarely see objects in their entirety, but our brains fill in the gaps to make a best guess at what we are seeing—and these predictions are usually an accurate reflection of reality.

The same is true of hearing, and neuroscientists have now identified a predictive-text-like brain mechanism that helps us anticipate what is coming next when we hear someone speaking. The findings, published this week in PLoS Biology, advance our understanding of how the brain processes speech. They also provide clues about how language evolved, and could even lead to new ways of diagnosing a variety of neurological conditions more accurately.

The new study builds on earlier findings that monkeys and human infants can implicitly learn to recognize artificial grammar, or the rules by which sounds in a made-up language are related to one another. Neuroscientist Yukiko Kikuchi of Newcastle University in England and her colleagues played sequences of nonsense speech sounds to macaques and humans. Consistent with the earlier findings, Kikuchi and her team found both species quickly learned the rules of the language’s artificial grammar. After this initial learning period the researchers played more sound sequences—some of which violated the fabricated grammatical rules. They used microelectrodes to record responses from hundreds of individual neurons as well as from large populations of neurons that process sound information. In this way they were able to compare the responses to the two types of sequences and determine the similarities between the two species’ reactions.
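
To give a feel for what such a task involves, the sketch below generates legal and rule-violating sequences from a toy finite-state grammar over nonsense syllables. The syllables and transition rules are invented for this example; they are not the materials Kikuchi and her colleagues used.

```python
import random

# A toy finite-state "artificial grammar": each syllable may only be followed
# by the syllables listed for it. Invented for illustration only.
GRAMMAR = {
    "START": ["bi", "ta"],
    "bi":    ["ro", "ku"],
    "ta":    ["ku"],
    "ku":    ["ro", "END"],
    "ro":    ["END"],
}

def grammatical_sequence(rng):
    """Walk the transition graph from START until END to build a legal sequence."""
    seq, state = [], "START"
    while True:
        state = rng.choice(GRAMMAR[state])
        if state == "END":
            return seq
        seq.append(state)

def is_grammatical(seq):
    """Check every transition, including the one out of START, against the rules."""
    state = "START"
    for syllable in seq:
        if syllable not in GRAMMAR.get(state, []):
            return False
        state = syllable
    return "END" in GRAMMAR.get(state, [])

def violation_sequence(rng):
    """Perturb a legal sequence by swapping adjacent syllables until a rule breaks."""
    while True:
        seq = grammatical_sequence(rng)
        if len(seq) < 2:
            continue
        i = rng.randrange(len(seq) - 1)
        seq[i], seq[i + 1] = seq[i + 1], seq[i]
        if not is_grammatical(seq):
            return seq

rng = random.Random(1)
print(grammatical_sequence(rng))  # a legal sequence, e.g. ['ta', 'ku', 'ro']
print(violation_sequence(rng))    # a violation, e.g. ['ku', 'bi'], which breaks the START rule
```

The comparison in the study is between neural responses to sequences of the first kind, which obey the learned rules, and sequences of the second kind, which break them.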

Read more: Scientific American

AI Systems Are Learning to Communicate With Humans

In the future, service robots equipped with artificial intelligence (AI) are bound to be a common sight. These bots will help people navigate crowded airports, serve meals, or even schedule meetings.

As these AI systems become more integrated into daily life, it is vital to find an efficient way to communicate with them. It is obviously more natural for a human to speak in plain language rather than a string of code. Further, as the relationship between humans and robots grows, it will be necessary to engage in conversations, rather than just give orders.

This human-robot interaction is what Manuela M. Veloso’s research is all about. Veloso, a professor at Carnegie Mellon University, has focused her research on CoBots, autonomous indoor mobile service robots that transport items, guide visitors to building locations, and traverse halls and elevators. The CoBot robots have been navigating autonomously for several years now and have traveled more than 1,000 km. These accomplishments have enabled the research team to pursue a new direction, focusing now on novel human-robot interaction.

“If you really want these autonomous robots to be in the presence of humans and interacting with humans, and being capable of benefiting humans, they need to be able to talk with humans,” Veloso says.

Read more: Futurism

Different Written Languages Are Equally Efficient at Conveying Meaning

The research, published in the journal Cognition, finds that the same amount of time is needed for a person from, for example, China to read and understand a text in Mandarin as it takes a person from Britain to read and understand a text in English – assuming both are reading their native language.

Professor of Experimental Psychology at Southampton, Simon Liversedge, says: “It has long been argued by some linguists that all languages have common or universal underlying principles, but it has been hard to find robust experimental evidence to support this claim. Our study goes at least part way to addressing this – by showing there is universality in the way we process language during the act of reading. It suggests no one form of written language is more efficient in conveying meaning than another.”

Read more: Neuroscience News

Welcome To The Era Of Big Translation

Despite the increasing ability to reach foreign customers, the lack of quality translation methods is still the most challenging aspect of global expansion. Currently, even using the most advanced software services is an expensive, complicated, and inaccurate process. The result is that, too often, businesses sacrifice millions of dollars in profit because marketing to global consumers is too complex to be worthwhile.

That’s why we need an era of “Big Translation”: one that leverages our existing technological tools and scales up translation capabilities to a level that actually matches global communication needs. Big Translation, a large-scale translation effort by people speaking two or more languages, would allow businesses, individuals, and even tweeters to get what they want translated easily and affordably.

The idea banks on the fact that the world certainly isn’t lacking in translation talent. Nearly half the world speaks two or more languages—that’s 3.65 billion people with the potential to contribute to translation. Presently, a number of innovators are trying to tap into our world’s language talent. Chief among these efforts is a groundbreaking idea that bypasses traditional translation tools and moves to a more accessible mobile app platform.

Read more: Fast Company

Understanding speech not just a matter of believing one’s ears

Even if we just hear part of what someone has said, when we are familiar with the context, we automatically add the missing information ourselves. Researchers from the Max Planck Institute for Empirical Aesthetics in Frankfurt and the Max Planck Institute for Cognitive and Brain Sciences in Leipzig have now succeeded in demonstrating how we do this.

Incomplete utterances are something we constantly encounter in everyday communication. Individual sounds and occasionally entire words fall victim to fast speech rates or imprecise articulation. In poetic language, omissions are used as a stylistic device or are the necessary outcome of the use of regular metre or rhyming syllables. In both cases, our comprehension of the spoken content is only slightly impaired or, in most cases, not affected at all.

The results of previous linguistic research suggest that language is particularly resilient to omissions when the linguistic information can be predicted both in terms of its content and phonetics. The most probable ending to the sentence: “The fisherman was in Norway and caught a sal…” is the word “salmon”. Accordingly, due to its predictability, this sentence-ending word should be able to accommodate the omission of the “m” and “on” sounds.
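
As a toy sketch of that prediction, assume a small, invented set of candidate words and context probabilities (none of which come from the research itself): hearing only “sal…” at the end of a sentence about fishing in Norway, a listener can settle on the word that both fits the fragment and is most likely in context, as the code below does.

```python
# Minimal sketch of the completion idea above: pick the candidate word that both
# matches the heard fragment and is most probable in context. The candidate words
# and the probabilities are invented for illustration.

# Hypothetical probabilities of words following "... was in Norway and caught a".
CONTEXT_PROBS = {
    "salmon":     0.55,
    "cod":        0.30,
    "cold":       0.12,
    "salad":      0.02,
    "salamander": 0.01,
}

def complete(fragment, context_probs):
    """Return the most probable word that starts with the fragment actually heard."""
    candidates = {w: p for w, p in context_probs.items() if w.startswith(fragment)}
    return max(candidates, key=candidates.get) if candidates else None

print(complete("sal", CONTEXT_PROBS))  # -> salmon
```

If the context instead made “salad” the most likely continuation, the same rule would choose it; the prediction comes from combining phonetic fit with contextual likelihood, which is why predictable sentence endings tolerate missing sounds so well.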

Read more: Medical Xpress

How a joke can help us unlock the mystery of meaning in language

What do you get if you cross a kangaroo with an elephant?

You’ll have to wait for the punchline, but you should already have shards of meaning tumbling about your mind. Now, jokes don’t have to be all that funny, of course, but if they are to work at all then they must construct something beyond the simple words deployed.

Language is the tissue that connects us in our daily social lives. We use it to gossip, to get a job, and give someone the sack. We use it to seduce, quarrel, propose marriage, get divorced and yes, tell the odd gag. In the absence of telepathy, it lets us interact with our nearest and dearest, and in our virtual web of digital communication, with hundreds of people we may never have met.

But while we now know an awful lot about the detail of the grammatical systems of the world’s 7,000 or so languages, scientific progress on the mysterious elixir of communication – meaning – has been a much tougher nut to crack.

Read more: Phys.org

How Dictation Software Is Changing the Way We Communicate

There’s often a noticeable difference between the way we write and the way we speak. Writing allows us to think before we type, and that extra step between a thought and its expression often leads to wording that’s more precise—and more formal. But dictation software may be changing that.

According to WIRED, as dictation software on smartphones and computers becomes more sophisticated, more people are using it to draft texts and emails. And although dictation is viewed by many as just a simple time-saver, it may also have an impact on the way we communicate.

A small study in 2003 found that dictation software makes writing more casual. For instance, people are more likely to drop titles like “Mr.” and “Ms.” They may also use less complex language in general, in part because they fear dictation software won’t be able to pick it up. Designer Natalie Roth told WIRED, “I simplify what I’m saying so the computer will understand it. It’s the way I speak to someone when I know that their English is a bit rusty.”

Read more: Mental Floss

Do Animals Have Their Own Language And Can They Talk To Each Other?

Can animals understand each other? And if they do, do they communicate through language? Is their language sophisticated enough to combine sounds into meaning in some grammatical sense? And can they talk about things that aren’t right in front of them? A lot of them can do some of those things!

Read more: Gizmodo Australia

Sperm Whales’ Language Reveals Hints of Culture

New ways to grab dinner, the trick to using a tool, and learning the local dialect. These are behaviors that animals pick up from each other. Killer whales, chimpanzees, and birds seem to have a cultural component to their lives. Now a new study suggests that sperm whales should be added to that list.

The ocean around the Galápagos Islands hosts thousands of female sperm whales and their calves that have organized into clans with their own dialects. (Mature males congregate in colder waters near the poles.) How these clans form has been something of a mystery until now.

A study published Tuesday in the journal Nature Communications suggests that culture—behaviors shared by group members—keeps these sperm whale clans together. Specifically, these deep-diving whales have a distinct series of clicks called codas they use to communicate during social interactions.

Read more: National Geographic