Magic: The Gathering cards have a secret language that only a few can translate

As complicated as Magic: The Gathering is — with all its various and ever-changing keywords, strategies, products, storylines, even entire gameplay formats — would you believe it also has its own secret language?

Phyrexian was first introduced in 2010. It’s a language spoken in-fiction by a loathsome species of cybernetic monsters on the plane of Phyrexia. Publisher Wizards of the Coast has never completely explained how it works. So, for more than a dozen years now, a dedicated group of amateurs has been working to translate it, going card by card with only a few lines of new text occasionally delivered with new sets of cards. What they’ve discovered is a tongue that’s simultaneously alien and also very much a part of our world.

Fernando Franco Félix, a science advisor for PBS’ Space Time, is perhaps the foremost expert in Phyrexian outside of Wizards. A polyglot — that is, a master of multiple languages, in this case including English, Spanish, and Esperanto — he’s been fascinated with Phyrexian for years now, and maintains a small but dedicated following on YouTube.

“I’ve always liked languages,” Félix told Polygon from his home in Aguascalientes, Mexico. “What I always say is that a language is like an art gallery, and every aspect of the language is like an art piece. I see languages as the greatest collaborative work of art in the history of humanity. You have millions of people and, without even realizing it, they are creating this system, which is beautiful.”

Of course, Phyrexian wasn’t created over thousands of years across multiple cultures. It’s a constructed language, also called a conlang. That makes it similar to the languages found in Tolkien’s The Lord of the Rings books, or modern conlangs like Star Trek’s Klingon and Game of Thrones’ Dothraki and Valyrian. You can easily find documents online that will teach you how to speak like an elf or a Klingon. Not so with Phyrexian.

Read more: Polygon

The life and death of the Spanish language in the Philippines

I remember that a Latin American scholar — I do not know from which country — angrily claimed at an academic congress that, because of the Spanish conquest, many indigenous languages went extinct. That left me thinking a great deal about the strong connection between language and identity, and about why the scholar — speaking plain Spanish — was so angry about the issue. Maybe he felt his identity was blurred by his inability to speak Quechua, Aymara, Guarani or any other indigenous language. I answered that if he really felt the loss was so great, there was still time for him to learn the indigenous language of his birthplace and pass it on to his children. But clearly he was not willing to do that: depriving his children of a connection with a community of more than 500 million speakers would undoubtedly affect their chances to prosper.

The Philippines has the opposite situation. After 333 years of Spanish presence, the language is almost entirely gone. When I first asked some Filipinos why that happened, I was told, “Spaniards did not want us to learn it.” The answer did not satisfy me: that would have been quite useless even from the colonizers’ point of view. Other people gave me a more elaborate answer: “The only Spanish figure in many provinces was the Spanish friar, and he did not want the people to learn it so he could keep power as the middleman between the government and the natives.” That sounded more logical, but it ignores one very important factor: how languages are learned and spread.

Read more: Manila Times

Lost in translation: is research into species being missed because of a language barrier?

Valeria Ramírez Castañeda, a Colombian biologist, spends her time in the Amazon studying how snakes eat poisonous frogs without getting ill. Although her findings come in many shapes and sizes, in her years as a researcher, she and her colleagues have struggled to get their biological discoveries out to the wider scientific community. With Spanish as her mother tongue, her research had to be translated into English to be published. That wasn’t always possible because of budget or time constraints – and it means that some of her findings were never published.

“It’s not that I’m a bad scientist,” she says. “It’s just because of the language.”

Ramírez Castañeda is not alone. There is a plethora of research in non-English-language papers that gets lost in translation, or is never translated, creating a gap in the global community’s scientific knowledge. As the amount of scientific research grows, so does the gap. This is especially true for conservation and biodiversity. Research about native traditions and knowledge tied to biodiversity is often conducted in the domestic non-colonial language and isn’t translated.

A study published in the journal PLOS Biology found that paying more attention to non-English-language research could expand the geographical coverage of biodiversity scientific evidence by 12% to 25% and the number of species covered by 5% to 32%. There is research on nine amphibian species, 217 bird species and 64 mammal species not covered in English-language studies. “We are essentially not using scientific evidence published in non-English-languages at the international level, but if we could make a better use of [it], we might be able to fill the existing gaps in the variability of current scientific evidence,” says Tatsuya Amano, a Japanese biodiversity researcher at the University of Queensland and the paper’s lead researcher.

Read more: The Guardian

Five diseases attack language areas in brain

Five different diseases attack the language areas in the left hemisphere of the brain, slowly causing the progressive language impairments known as primary progressive aphasia (PPA), reports a new Northwestern Medicine study.

“We’ve discovered each of these diseases hits a different part of the language network,” said lead author Dr. M. Marsel Mesulam, director of Northwestern’s Mesulam Center for Cognitive Neurology and Alzheimer’s Disease. “In some cases, the disease hits the area responsible for grammar, in others the area responsible for word comprehension. Each disease progresses at a different rate and has different implications for intervention.”

This study is based on the largest set of PPA autopsies—118 cases—ever assembled.

It will be published April 20 in the journal Brain.

“The patients had been followed for more than 25 years, so this is the most extensive study to date on life expectancy, type of language impairment and relationship of disease to details of language impairment,” said Mesulam, also chief of behavioral neurology at Northwestern University Feinberg School of Medicine.

Patients with PPA were prospectively enrolled in a longitudinal study that included language testing and imaging of brain structure and brain function. The study included consent to brain donation at death.

An estimated one in 100,000 people have PPA, Mesulam said.

Read more: Medical Xpress

AI Used to Fill in Missing Words in Ancient Writings

Researchers have developed an artificial intelligence (AI) system to help fill in missing words in ancient writings.

The system is designed to help historians restore the writings and identify when and where they were written.

Many ancient populations used writings, also known as inscriptions, to document different parts of their lives. The inscriptions have been found on materials such as rock, ceramic and metal. The writings often contained valuable information about how ancient people lived and how they structured their societies.

But in many cases, the objects containing such inscriptions have been damaged over the centuries. This left major parts of the inscriptions missing and difficult to identify and understand.

In addition, many of the inscribed objects were moved from areas where they were first created. This makes it difficult for scientists to discover when and where the writings were made.

The new AI-based method serves as a technological tool to help researchers repair missing inscriptions and estimate the true origins of the records.

The researchers, led by Alphabet’s AI company DeepMind, call their tool Ithaca. In a statement, the researchers said the system is “the first deep neural network that can restore the missing text of damaged inscriptions.” A neural network is a machine learning computer system built to act like the human brain.
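The restoration task Ithaca performs can be illustrated, very loosely, with a toy statistical model. Ithaca itself is a deep neural network trained on a large corpus of Greek inscriptions; the bigram-context model below, with an invented English "corpus," is only a stand-in for the core idea of predicting a missing character from the characters that survive around it.

```python
from collections import Counter

# Stand-in reference corpus (invented for illustration; Ithaca trains
# on tens of thousands of real Greek inscriptions).
corpus = "the people of athens honoured the priest of apollo"

# Count which character most often appears between each (left, right) pair.
triples = Counter(zip(corpus, corpus[1:], corpus[2:]))

def restore(damaged: str) -> str:
    """Replace each '?' with the likeliest character given its neighbours."""
    out = list(damaged)
    for i, ch in enumerate(out):
        if ch == "?" and 0 < i < len(out) - 1:
            candidates = [(n, mid) for (left, mid, right), n in triples.items()
                          if left == out[i - 1] and right == out[i + 1]]
            if candidates:
                out[i] = max(candidates)[1]  # highest-count middle character
    return "".join(out)

print(restore("the ?eople of a?hens"))  # -> "the people of athens"
```

A real system conditions on far more than two neighbouring characters, which is why deep models outperform counting tricks like this one; but the shape of the problem, filling gaps by exploiting regularities learned from intact text, is the same.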

Read more: VOA

Do Birds Have Language?

In our quest to find what makes humans unique, we often compare ourselves with our closest relatives: the great apes. But when it comes to understanding the quintessentially human capacity for language, scientists are finding that the most tantalizing clues lie farther afield.

Human language is made possible by an impressive aptitude for vocal learning. Infants hear sounds and words, form memories of them, and later try to produce those sounds, improving as they grow up. Most animals cannot learn to imitate sounds at all. Though nonhuman primates can learn how to use innate vocalizations in new ways, they don’t show a similar ability to learn new calls. Interestingly, a small number of more distant mammal species, including dolphins and bats, do have this capacity. But among the scattering of nonhuman vocal learners across the branches of the bush of life, the most impressive are birds — hands (wings?) down.

Parrots, songbirds and hummingbirds all learn new vocalizations. The calls and songs of some species in these groups appear to have even more in common with human language, such as conveying information intentionally and using simple forms of some of the elements of human language such as phonology, semantics and syntax. And the similarities run deeper, including analogous brain structures that are not shared by species without vocal learning.

These parallels have motivated an explosion of research in recent decades, says ethologist Julia Hyland Bruno of Columbia University, who studies social aspects of song learning in zebra finches. “Lots of people have made analogies between language and birdsong,” she says.

Hyland Bruno studies zebra finches because they are more social than most migratory birds — they like to travel in small bands that occasionally gather into larger groups. “I’m interested in how it is that they learn their culturally transmitted vocalizations in these groups,” says Hyland Bruno, coauthor of a paper in the 2021 Annual Review of Linguistics comparing birdsong learning and culture with human language.

Both birdsong and language are passed culturally to later generations through vocal learning. Geographically distant populations of the same bird species can make small tweaks to their songs over time, eventually resulting in a new dialect — a process similar in some ways to how humans develop different accents, dialects and languages.

With all these similarities in mind, it’s reasonable to ask if birds themselves have language. It may come down to how you define it.

“I wouldn’t say they have language in the way linguistic experts define it,” says neuroscientist Erich Jarvis of the Rockefeller University in New York City, and a coauthor of Hyland Bruno’s paper on birdsong and language. But for scientists like Jarvis who study the neurobiology of vocal communication in birds, “I would say they have a remnant or a rudimentary form of what we might call spoken language.”

Read more: Smithsonian Magazine

An ancient language has defied translation for 100 years. Can AI crack the code?

Jiaming Luo grew up in mainland China thinking about neglected languages. When he was younger, he wondered why the different languages his mother and father spoke were often lumped together as Chinese “dialects.”

When he became a computer science doctoral student at MIT in 2015, his interest collided with his advisor’s long-standing fascination with ancient scripts. After all, what could be more neglected — or, to use Luo’s more academic term, “lower resourced” — than a long-lost language, left to us as enigmatic symbols on scattered fragments? “I think of these languages as mysteries,” Luo told Rest of World over Zoom. “That’s definitely what attracts me to them.”

In 2019, Luo made headlines when, working with a team of fellow MIT researchers, he brought his machine-learning expertise to the decipherment of ancient scripts. He and his colleagues developed an algorithm informed by patterns in how languages change over time. They fed their algorithm words in a lost language and in a known related language; its job was to align words from the lost language with their counterparts in the known language. Crucially, the same algorithm could be applied to different language pairs.
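The alignment idea can be sketched in a few lines. To be clear, this is not the MIT team's model, which used neural embeddings constrained by patterns of historical sound change; the sketch below substitutes plain character edit distance and an invented toy vocabulary, purely to show what "align each lost-language word with its counterpart in a known related language" means computationally.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (one row at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def align(lost_words, known_words):
    """Map each undeciphered word to its nearest known-language word."""
    return {w: min(known_words, key=lambda k: edit_distance(w, k))
            for w in lost_words}

# Toy, invented word forms -- not real Ugaritic or Linear B data.
lost = ["malu", "tiko"]
known = ["malo", "tika", "surat"]
print(align(lost, known))  # -> {'malu': 'malo', 'tiko': 'tika'}
```

Raw edit distance would fail badly on real decipherment, since related languages diverge through systematic sound changes rather than random typos; modeling those changes is exactly what the learned component of the MIT algorithm contributes.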

Luo and his colleagues tested their model on two ancient scripts that had already been deciphered: Ugaritic, which is related to Hebrew, and Linear B, which was first discovered among Bronze Age–era ruins on the Greek island of Crete. It took professional and amateur epigraphists — people who study ancient written matter — nearly six decades of mental wrangling to decode Linear B. Officially, 30-year-old British architect Michael Ventris is primarily credited with its decipherment, although the private efforts of classicist Alice Kober lay the groundwork for his breakthrough. Sitting night after night at her dining table in Brooklyn, New York, Kober compiled a makeshift database of Linear B symbols, comprising 180,000 paper slips filed in cigarette boxes, and used those to draw important conclusions about the nature of the script. She died in 1950, two years before Ventris cracked the code. Linear B is now recognized as the earliest form of Greek.  

Luo and his team wanted to see if their machine-learning model could get to the same answer, but faster. The algorithm yielded what was called “remarkable accuracy”: it was able to correctly translate 67.3% of Linear B’s words into their modern-day Greek equivalents. According to Luo, it took between two and three hours to run the algorithm once it had been built, cutting out the days or weeks — or months or years — that it might take to manually test out a theory by translating symbols one by one. The results for Ugaritic showed an improvement on previous attempts at automatic decipherment.

The work raised an intriguing proposition. Could machine learning assist researchers in their quests to crack other, as-yet undeciphered scripts — ones that have so far resisted all attempts at translation? What historical secrets might be unlocked as a result?

Read more: Rest of World

Researchers use linguistic analysis to uncover differences between psychedelic drug experiences

New research sheds light on the different types of subjective experiences produced by five different types of psychedelic substances. The study, published in Psychopharmacology, used computer algorithms to analyze thousands of anonymously published reports about the effects of psychedelic drugs.

A growing body of research indicates that psychedelic drugs like 3,4-methylenedioxymethamphetamine (MDMA) and psilocybin hold potential for the treatment of psychiatric conditions such as depression and post-traumatic stress disorder. While it is known that psychedelics induce profound changes in perception and consciousness, little research has quantified the different experiences associated with consuming these substances, especially in a naturalistic context.

“Though it has not been my main research focus previously, I was often fascinated by promising findings from studies examining psychedelic treatments for various mental disorders in patients who were unresponsive to standard treatments,” said study author Adrian Hase, a postdoctoral researcher at the University of Fribourg.

“The psychiatric research group I am now part of focuses on stress and psychopathology, but also conducts basic research into the effects of psychedelic substances. We often talk about the topic and one day came up with this idea of analyzing online experience reports to compare various psychedelic substances.”

The researchers used software called Linguistic Inquiry and Word Count (LIWC) to analyze the content of 2,947 online reports from the Erowid experience vault. The sample included 971 reports about psilocybin-containing mushrooms, 671 reports about LSD, 312 reports about DMT, 163 reports about ketamine, 68 reports about ayahuasca, and 236 reports about antidepressant medication.
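LIWC-style analysis boils down to dictionary lookups: score a text by the fraction of its words that fall into each psychological category. The sketch below uses made-up category word lists (the real LIWC dictionaries are proprietary and far larger) to show the shape of the computation applied to each report.

```python
# Hypothetical mini-dictionary; the real LIWC lexicon has dozens of
# categories and thousands of entries per category.
CATEGORIES = {
    "positive_emotion": {"love", "joy", "beautiful", "bliss"},
    "perception":       {"see", "saw", "colors", "sound", "felt"},
    "anxiety":          {"fear", "afraid", "panic"},
}

def liwc_like_scores(text: str) -> dict:
    """Fraction of words in the text that belong to each category."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    total = len(words)
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

report = "I saw beautiful colors and felt pure joy"
scores = liwc_like_scores(report)
print(scores["positive_emotion"])  # 0.25 (2 of 8 words)
print(scores["perception"])        # 0.375 (3 of 8 words)
```

Running such per-category percentages over thousands of Erowid reports, then comparing the score distributions across substances, is the kind of aggregate contrast the study draws.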

Read more: PsyPost

Rare African Script Hints At How Writing Evolved

In the early 19th century, a small group of Liberians invented a way to write the Vai language down. Instead of copying Roman or Arabic letters, as occurred with most languages first written down in recent times, they invented their own script. This created a rare example where linguists can trace the entire history of a written script. The evolution might settle a debate between theories about how Egyptian hieroglyphics turned into the letters we use today.

The ancestry of our modern letters can be traced back to the walls of ancient Egypt. Most famously, our letter A is thought to be descended from the ox’s head drawn by Egyptians via a Phoenician symbol and the Greek alpha. However, there are at least two theories on how this happened. “There’s a famous hypothesis that letters evolve from pictures to abstract signs. But there are also plenty of abstract letter-shapes in early writing,” said Dr Piers Kelly of the University of New England (Australia) in a statement.

Kelly thought it would be useful to compare the evolution of a script whose entire history is documented. In Current Anthropology, Kelly and co-authors trace the development of the script for the Vai language from the 1830s to today.

The Vai language is spoken by around 100,000 people in Liberia, along with a smaller number in neighboring Sierra Leone, and emigrant communities around the world. Until at least 1800 it appears to have been entirely oral, with no written script.

Read more: IFLScience

Dog brains can distinguish between languages

Dog brains can detect speech and show different activity patterns to familiar and unfamiliar languages, according to a new brain imaging study by researchers from the Department of Ethology, Eötvös Loránd University (Hungary). This is the first demonstration that a non-human brain can differentiate two languages. This work has been published in NeuroImage.

“Some years ago, I moved from Mexico to Hungary to join the Neuroethology of Communication Lab at the Department of Ethology, Eötvös Loránd University for my postdoctoral research. My dog, Kun-kun, came with me. Before, I had only talked to him in Spanish. So I was wondering whether Kun-kun noticed that people in Budapest spoke a different language, Hungarian,” says Laura V. Cuaya, first author of the study. “We know that people, even preverbal human infants, notice the difference. But maybe dogs do not bother. After all, we never draw our dogs’ attention to how a specific language sounds. We designed a brain imaging study to find this out.

“Kun-kun and 17 other dogs were trained to lie motionless in a brain scanner, where we played them speech excerpts of ‘The Little Prince’ in Spanish and Hungarian. All dogs had heard only one of the two languages from their owners, so this way, we could compare a highly familiar language to a completely unfamiliar one. We also played dogs scrambled versions of these excerpts, which sound completely unnatural, to test whether they detect the difference between speech and non-speech at all.”

When comparing brain responses to speech and non-speech, researchers found distinct activity patterns in dogs’ primary auditory cortex. This distinction was there independently from whether the stimuli originated from the familiar or the unfamiliar language. There was, however, no evidence that dog brains would have a neural preference for speech over non-speech.

“Dog brains, like human brains, can distinguish between speech and non-speech. But the mechanism underlying this speech detection ability may be different from speech sensitivity in humans: whereas human brains are specially tuned to speech, dog brains may simply detect the naturalness of the sound,” explains Raúl Hernández-Pérez, coauthor of the study.

In addition to speech detection, dog brains could also distinguish between Spanish and Hungarian.

Read more:

Linguists are using Star Trek to save endangered languages

Most languages develop through centuries of use among groups of people. But some have a different origin: They are invented, from scratch, from one individual’s mind. Familiar examples include the international language Esperanto, the Klingon language from Star Trek, and the Elvish tongues from The Lord of the Rings.

The activity isn’t new — the earliest recorded invented language was by medieval nun Hildegard von Bingen — but the internet now allows much wider sharing of such languages among the small communities of people who speak and create them.

Christine Schreyer, a linguistic anthropologist at the University of British Columbia’s Okanagan campus in Kelowna, Canada, has studied invented languages and the people who speak them, a topic she writes about in the 2021 Annual Review of Anthropology. But Schreyer brings another skill to the table: She’s a language creator herself and has invented several languages for the movie industry: the Kryptonian language for “Man of Steel,” Eltarian for “Power Rangers,” Beama (Cro-Magnon) for “Alpha” and Atlantean for “Zack Snyder’s Justice League.”

Schreyer spoke with Knowable Magazine about her experience in this unusual world, and the practical lessons that it provides for people trying to revitalize endangered natural languages. This interview has been edited for length and clarity.

Read more: Inverse

Languages could go extinct at a rate of one per month this century

Denser road networks, higher levels of education and even climate change are just a few of the factors that could lead to the loss of more than 20 per cent of the world’s 7,000 languages by the end of the century – equivalent to one language vanishing per month.

Based on a new model similar to those used for predicting species loss, a team of biologists, mathematicians and linguists led by Lindell Bromham at Australian National University in Canberra has determined that, without effective conservation, language loss will increase five-fold by 2100.

“This is a frightening statistic,” says Bromham, adding that her team’s estimates are “conservative”.

“Every time a language is lost, we lose so much,” she says. “We lose a rich source of cultural information; we lose a unique and beautiful expression of human creativity.”

Current language loss estimates vary considerably, with some predicting that up to 90 per cent of languages might no longer be spoken at the start of the next century.

Bromham, an evolutionary biologist, and her colleagues suspected that by borrowing modelling techniques from studies on biodiversity loss, they might be able to capture a more statistically sound view of language diversity loss.

Read more: New Scientist