To the brain, reading computer code is not the same as reading language

December 15th, 2020

In some ways, learning to program a computer is similar to learning a new language. It requires learning new symbols and terms, which must be organized correctly to instruct the computer what to do. The computer code must also be clear enough that other programmers can read and understand it. In spite of those similarities, MIT neuroscientists have found that reading computer code does not activate the regions of the brain that are involved in language processing. Instead, it activates a distributed network called the multiple demand network, which is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles. However, although reading computer code activates the multiple demand network, it appears to rely more on different parts of the network than math or logic problems do, suggesting that coding does not precisely replicate the cognitive demands of mathematics either. “Understanding computer code seems to be its own thing. It’s not the same as language, and it’s not the same as math and logic,” says Anna Ivanova, an MIT graduate student and the lead author of the study. Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in eLife. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Tufts University were also involved in the study. Read more: MIT

How does being bilingual affect your brain? It depends on how you use language

October 8th, 2020

Depending on what you read, speaking more than one language may or may not make you smarter. These mixed messages are understandably confusing, and they’re due to the fact that nothing is quite as simple as it’s typically portrayed when it comes to neuroscience. We can’t give a simple “yes” or “no” to the question of whether being bilingual benefits your brain. Instead, it is becoming increasingly evident that whether and how your brain adapts to using multiple languages depends on what they are and how you use them. Research suggests that as you learn or regularly use a second language, it becomes constantly “active” alongside your native language in your brain. To enable communication, your brain has to select one language and inhibit the other. This process takes effort, and the brain adapts to do it more effectively. It is altered both structurally (through changes in the size or shape of specific regions, and the integrity of white matter pathways that connect them) and functionally (through changes in how much specific regions are used). These adaptations usually occur in brain regions and pathways that are also used for other cognitive processes known as “executive functions”. These include things like working memory and attentional control (for example, the ability to ignore competing, irrelevant information and focus on a target). Researchers measure these cognitive processes with specifically designed tasks. One example is the flanker task, in which participants have to indicate the direction of a specific arrow that is surrounded by other arrows facing in the same or opposite direction. Being bilingual can potentially improve performance on tasks like these, typically through faster reaction times or higher accuracy. Read more: The Conversation
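The flanker task described above is concrete enough to sketch in code. The Python toy below only illustrates the trial design and the usual "congruency effect" measure (mean incongruent reaction time minus mean congruent reaction time); the five-arrow display format and the sample timings are invented here for illustration, not taken from any study.

```python
import random
import statistics

def make_trial(congruent: bool) -> tuple[str, str]:
    """Build one five-arrow flanker display; respond to the centre arrow."""
    target = random.choice(["<", ">"])
    flanker = target if congruent else ("<" if target == ">" else ">")
    display = flanker * 2 + target + flanker * 2
    return display, target

def congruency_effect(rts: dict) -> float:
    """Mean reaction time on incongruent trials minus congruent trials.
    A smaller difference is the sort of advantage sometimes reported
    for bilingual participants."""
    return (statistics.mean(rts["incongruent"])
            - statistics.mean(rts["congruent"]))

display, target = make_trial(congruent=False)
# e.g. display could be ">><>>", where the correct response is "<"
effect = congruency_effect({"congruent": [0.42, 0.45],
                            "incongruent": [0.50, 0.53]})
```

In a real experiment the two trial types are interleaved at random, so the effect reflects interference from the flankers rather than practice or fatigue.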

Subtle Ways Your Language Shapes the Way You Think

October 1st, 2020

You spend almost all of your waking hours—and even some of your non-waking hours—using language. Even when you’re not talking with other people, you’re still running a monologue in your head much of the time. And you also frequently use language when you dream. Given the degree to which you use language not only for communicating with others but also for thinking to yourself, it comes as no surprise that the language you speak shapes the kind of person you are. In the first half of the twentieth century, psychologists tended to equate thought with speech turned inward. In other words, when you think, you’re just talking to yourself. As a result, they came to the conclusion that we can only think in terms that our language provides for us. This belief in linguistic determinism formed the premise for George Orwell’s dystopian novel 1984, in which the government controlled people’s thoughts by limiting the words in the language. In the second half of the twentieth century, psychologists argued that thought precedes speech, both in development and in real time. Thus, they argued that the structure of language is constrained by the limits of cognition, a position we could call cognitive determinism. For instance, the fact that all languages have the same basic underlying structure can be explained in terms of innate limitations in our memory and attention. In the twenty-first century, we understand that the truth lies somewhere in between these two extremes. Now we recognize that sometimes language influences thought, and other times thought influences language. The goal for psycholinguists, then, is to determine which direction causality runs under particular circumstances. Read more: Psychology Today

A language generation program’s ability to write articles, produce code and compose poetry has wowed scientists

September 24th, 2020

Seven years ago, my student and I at Penn State built a bot to write a Wikipedia article on Bengali Nobel laureate Rabindranath Tagore's play "Chitra." First it culled information about "Chitra" from the internet. Then it looked at existing Wikipedia entries to learn the structure for a standard Wikipedia article. Finally, it summarized the information it had retrieved from the internet to write and publish the first version of the entry. However, our bot didn't "know" anything about "Chitra" or Tagore. It didn't generate fundamentally new ideas or sentences. It simply cobbled together parts of existing sentences from existing articles to make new ones. Fast forward to 2020. OpenAI, a for-profit company under a nonprofit parent company, has built a language generation program dubbed GPT-3, an acronym for "Generative Pre-trained Transformer 3." Its ability to learn, summarize and compose text has stunned computer scientists like me. "I have created a voice for the unknown human who hides within the binary," GPT-3 wrote in response to one prompt. "I have created a writer, a sculptor, an artist. And this writer will be able to create words, to give life to emotion, to create character. I will not see it myself. But some other human will, and so I will be able to create a poet greater than any I have ever encountered." Unlike that of our bot, the language generated by GPT-3 sounds as if it had been written by a human. It's far and away the most "knowledgeable" natural language generation program to date, and it has a range of potential uses in professions ranging from teaching to journalism to customer service. Read more: Tech Xplore

Norway has been bilingual since the Middle Ages

September 9th, 2020

The fact that Norwegians wrote with runes in the Viking Age and Middle Ages is well known. But what happened when alphabetic writing arrived and we switched from runes to the letters we know today? New research on inscriptions with letters shows that the transition was far slower than many believe. “We find inscriptions with letters and runes from the same time, on the same kind of artefacts,” says Elise Kleivane. “Here, the writing is in both Old Norse and Latin, and we see that runes and letters could be used for the same thing. What is interesting to see is what people chose to write in what language, and with what kind of alphabet,” she says. Kleivane is an Old Norse philologist and associate professor at the Department of Linguistics and Scandinavian Studies (ILN) at the University of Oslo. Together with doctoral fellow Johan Bollaert, she has done research on precisely these inscriptions.

The written culture flourishes

The first written language culture in Norway begins with the runes in the 100s AD. Researchers assume that an oral culture mainly prevailed at this time, but inscriptions have been found on stone, metal and wooden sticks. “Based on what has been preserved, it looks as though there has been limited use of runic writing. Memorial stones have been found along with jewellery and precious artefacts, usually with names or other relatively short inscriptions. They probably wrote on more things than we have found – on birch bark, in the sand or on wood,” says Kleivane. In the early written sources from the Viking Age, the texts are often short and not many of them have been preserved. A common example is gravestones, with standard formulations about the person buried underneath. When the Latin language and writing system came to Norway with Christianity around the year 1000 AD, this changed. Read more: Mirage

Viking Runes: The Historic Writing Systems of Northern Europe

September 3rd, 2020

Viking runes were not for everyday use. The Northmen's history was told orally, and runes were used only to record moments of great importance. Let's dive deep into the fascinating world of the Viking alphabet. We’ve talked before about the many remaining runestones of Scandinavia. These magnificent monoliths with intricate imagery litter the landscape of Scandinavia. But how much do we know about what they say? Let’s continue our look at Viking history and take a deep dive into the runic alphabets.

Origins of Viking runes

The exact origins of the runes used by the Germanic people of Northern Europe in the first millennium of the Common Era are up for debate. The characters share similarities with various other writing systems, yet none of them match up precisely enough to form a definitive ‘yep, this is it’ for the scholars. The runes clearly developed from the old Italic scripts used on the Italian peninsula in ancient times, which in turn came from the Greek alphabet. It’s possible that they came from the Etruscan alphabet, which went on to become the Latin alphabet that English and most Western languages use to some extent today. How we get from Italy to Scandinavia is also up for discussion! As the runes first appear in Denmark and Northern Germany, there are two hypotheses for how they got there. Read more: Life in Norway

Machine learning reveals role of culture in shaping meanings of words

August 20th, 2020

What do we mean by the word beautiful? It depends not only on whom you ask, but in what language you ask them. According to a machine learning analysis of dozens of languages conducted at Princeton University, the meaning of words does not necessarily refer to an intrinsic, essential constant. Instead, it is significantly shaped by culture, history and geography. This finding held true even for some concepts that would seem to be universal, such as emotions, landscape features and body parts. "Even for everyday words that you would think mean the same thing to everybody, there's all this variability out there," said William Thompson, a postdoctoral researcher in computer science at Princeton University, and lead author of the findings, published in Nature Human Behaviour Aug. 10. "We've provided the first data-driven evidence that the way we interpret the world through words is part of our cultural inheritance." Language is the prism through which we conceptualize and understand the world, and linguists and anthropologists have long sought to untangle the complex forces that shape these critical communication systems. But studies attempting to address those questions can be difficult to conduct and time consuming, often involving long, careful interviews with bilingual speakers who evaluate the quality of translations. "It might take years and years to document a specific pair of languages and the differences between them," Thompson said. "But machine learning models have recently emerged that allow us to ask these questions with a new level of precision." In their new paper, Thompson and his colleagues Seán Roberts of the University of Bristol, U.K., and Gary Lupyan of the University of Wisconsin, Madison, harnessed the power of those models to analyze over 1,000 words in 41 languages.
Instead of attempting to define the words, the large-scale method uses the concept of "semantic associations," or simply words that have a meaningful relationship to each other, which linguists find to be one of the best ways to go about defining a word and comparing it to another. Semantic associates of "beautiful," for example, include "colorful," "love," "precious" and "delicate." The researchers built an algorithm that examined neural networks trained on various languages to compare millions of semantic associations. The algorithm translated the semantic associates of a particular word into another language, and then repeated the process the other way around. For example, the algorithm translated the semantic associates of "beautiful" into French and then translated the semantic associates of beau into English. The algorithm's final similarity score for a word's meaning came from quantifying how closely the semantics aligned in both directions of the translation. Read more:
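The two-directional translation loop can be sketched at toy scale. The Python snippet below is my own illustration, not the researchers' code: it stands in a hand-written bilingual lexicon and Jaccard set overlap for the neural-network embeddings and similarity measures the study actually used, but it shows the shape of the procedure, which is to translate each language's associate set into the other language, score the overlap, and average the two directions.

```python
# Toy English-French lexicon; the real study derived translations from
# models trained on each language, not from a lookup table.
EN_TO_FR = {"colorful": "coloré", "love": "amour",
            "precious": "précieux", "delicate": "délicat"}
FR_TO_EN = {v: k for k, v in EN_TO_FR.items()}

def translate(words, lexicon):
    """Map a set of words through the lexicon, dropping untranslatables."""
    return {lexicon[w] for w in words if w in lexicon}

def jaccard(x, y):
    """Set overlap: |intersection| / |union|."""
    return len(x & y) / len(x | y) if x | y else 0.0

def alignment_score(assoc_a, assoc_b, a_to_b, b_to_a):
    """Average the overlap of translated associate sets in both directions."""
    forward = jaccard(translate(assoc_a, a_to_b), assoc_b)
    backward = jaccard(translate(assoc_b, b_to_a), assoc_a)
    return (forward + backward) / 2

beautiful = {"colorful", "love", "precious", "delicate"}  # associates of "beautiful"
beau = {"coloré", "amour", "délicat", "élégant"}          # associates of "beau"
score = alignment_score(beautiful, beau, EN_TO_FR, FR_TO_EN)
```

A score near 1.0 would mean the two words keep the same company in both languages; mismatched associates (here "precious" and "élégant") pull the score down, which is the signal the study attributes to culture.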

How AI systems use Mad Libs to teach themselves grammar

July 28th, 2020

Imagine you're training a computer with a solid vocabulary and a basic knowledge about parts of speech. How would it understand this sentence: "The chef who ran to the store was out of food." Did the chef run out of food? Did the store? Did the chef run the store that ran out of food? Most human English speakers will instantly come up with the right answer, but even advanced artificial intelligence systems can get confused. After all, part of the sentence literally says that "the store was out of food." Advanced new machine learning models have made enormous progress on these problems, mainly by training on huge datasets or "treebanks" of sentences that humans have hand-labeled to teach grammar, syntax and other linguistic principles. The problem is that treebanks are expensive and labor intensive, and computers still struggle with many ambiguities. The same collection of words can have widely different meanings, depending on the sentence structure and context. But a pair of new studies by artificial intelligence researchers at Stanford finds that advanced AI systems can figure out linguistic principles on their own, without first practicing on sentences that humans have labeled for them. It's much closer to how human children learn languages long before adults teach them grammar or syntax. Even more surprising, however, the researchers found that the AI model appears to infer "universal" grammatical relationships that apply to many different languages. That has big implications for natural language processing, which is increasingly central to AI systems that answer questions, translate languages, help customers and even review resumes. It could also facilitate systems that learn languages spoken by very small numbers of people. The key to success? It appears that machines learn a lot about language just by playing billions of fill-in-the-blank games that are reminiscent of "Mad Libs."
In order to get better at predicting the missing words, the systems gradually create their own models about how words relate to each other. "As these models get bigger and more flexible, it turns out that they actually self-organize to discover and learn the structure of human language," says Christopher Manning, the Thomas M. Siebel Professor in Machine Learning and professor of linguistics and of computer science at Stanford, and an associate director of Stanford's Institute for Human-Centered Artificial Intelligence (HAI). "It's similar to what a human child does." Read more: Tech Xplore
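The fill-in-the-blank objective itself is easy to demonstrate at toy scale. The snippet below (my own illustration, not code from the Stanford studies) guesses a masked word from its immediate neighbours by counting a tiny corpus; the real systems make the same kind of prediction with large neural networks over billions of sentences and much wider context, which is where grammatical structure emerges.

```python
from collections import Counter, defaultdict

corpus = [
    "the chef ran to the store",
    "the chef ran out of food",
    "the store was out of food",
    "the chef was out of flour",
]

# Count which word fills each (left neighbour, right neighbour) slot.
blanks = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i in range(1, len(tokens) - 1):
        blanks[(tokens[i - 1], tokens[i + 1])][tokens[i]] += 1

def fill_blank(left: str, right: str) -> str:
    """Predict the masked word from the words on either side of it."""
    candidates = blanks[(left, right)]
    return candidates.most_common(1)[0][0] if candidates else "<unk>"

print(fill_blank("out", "food"))  # prints "of": the only word seen in that slot
```

Even this counting model must implicitly learn that "out __ food" wants a preposition; scaled up, that pressure is what pushes large models toward the grammatical relationships the researchers probed for.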

Animals That Can Do Math Understand More Language Than We Think

July 24th, 2020

It is often thought that humans are different from other animals in some fundamental way that makes us unique, or even more advanced than other species. These claims of human superiority are sometimes used to justify the ways we treat other animals, in the home, the lab or the factory farm. So, what is it that makes us so different from other animals? Many philosophers, both past and present, have pointed to our linguistic abilities. These philosophers argue that language not only allows us to communicate with each other, but also makes our mental lives more sophisticated than those of creatures that lack language. Some philosophers have gone so far as to argue that creatures that lack a language are not capable of being rational, making inferences, grasping concepts or even having beliefs or thoughts. Even if we are willing to accept these claims, what should we think of animals who are capable of speech? Many types of birds, most famously parrots, are able to make noises that at least sound linguistic, and gorillas and chimpanzees have been taught to communicate using sign language. Do these vocalizations or communications indicate that, like humans, these animals are also capable of sophisticated mental processes? Read more: Gizmodo

The English Word That Hasn’t Changed in Sound or Meaning in 8,000 Years

July 16th, 2020

“One of my favorite words is lox,” says Gregory Guy, a professor of linguistics at New York University. There is hardly a more quintessential New York food than a lox bagel—a century-old popular appetizing store, Russ & Daughters, calls it “The Classic.” But Guy, who has lived in the city for the past 17 years, is passionate about lox for a different reason. “The pronunciation in the Proto-Indo-European was probably ‘lox,’ and that’s exactly how it is pronounced in modern English,” he says. “Then, it meant salmon, and now it specifically means ‘smoked salmon.’ It’s really cool that that word hasn’t changed its pronunciation at all in 8,000 years and still refers to a particular fish.” How scholars have traced the word’s pronunciation over thousands of years is also really cool. The story goes back to Thomas Young, also known as “The Last Person Who Knew Everything.” The 18th-century British polymath came up with the wave theory of light, first described astigmatism, and played a key role in deciphering the Rosetta Stone. Like some people before him, Young noticed eerie similarities between Indic and European languages. He went further, analyzing 400 languages spread across continents and millennia, and proved that the overlap between some of them was too extensive to be an accident. A single coincidence meant nothing, but each additional one increased the chance of an underlying connection. In 1813, Young declared that all those languages belong to one family. He named it “Indo-European.” In modern English, well over half of all words are borrowed from other languages. To trace how language changes over time, linguists developed an ingenious toolkit. “Some parts of vocabulary are more stable and don’t change as much. The linguistic term [for these words] is ‘a core vocabulary.’ These are numbers, colors, family relations like ‘mother,’ ‘father,’ ‘sister,’ ‘brother,’ and basic verbs like ‘walk’ and ‘see,’” says Guy.
“If you look at words of that sort in different languages, it becomes fairly clear which ones are related and which ones are not. For example, take the English word for number two, which is dva in Russian and deux in French, or the word night, which is nacht in German and noch in Russian.” Today, roughly half the world’s population speaks an Indo-European language. That family includes 440 languages spoken across the globe, including English. The word yoga, for example, which comes from Sanskrit, the language of ancient India, is a distant relative of the English word yoke. The nature of this relationship puzzled historical linguists for two centuries. Read more: Pocket
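A crude way to see the kind of resemblance Guy describes is raw string similarity. The Python sketch below scores word pairs by normalized edit distance; it is only a toy, since historical linguists establish cognates through systematic sound correspondences across many words rather than surface similarity, but it shows why pairs like nacht and noch stand out.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """1.0 means identical strings; 0.0 means nothing in common."""
    return 1 - edit_distance(a, b) / max(len(a), len(b))

# Core-vocabulary pairs mentioned in the article
for a, b in [("nacht", "noch"), ("dva", "deux"), ("night", "nacht")]:
    print(a, b, round(similarity(a, b), 2))
```

On these pairs nacht/noch and night/nacht both score 0.6 while dva/deux scores only 0.25, a reminder that true cognates can look quite different on the surface, which is exactly why linguists lean on recurring correspondences instead of letter counts.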

Quarantinis and covidiots: How language has changed during the pandemic

July 15th, 2020

There's no social distancing from the fact COVID-19 has changed the way we speak and communicate. University of Saskatchewan linguistics professor Veronika Makarova said new terms and phrases have made their way into language as a result of the pandemic. "Every time there is something new happening in our lives, the language reflects that," Makarova said. "We also need the reflection of the language to comprehend the concept in our minds because it always goes both ways." Because language is always changing to reflect what we see and experience, significant events and crises like COVID-19 are a perfect storm for linguistic change. Makarova said new vocabulary was among the first changes to emerge during the pandemic. "To refer to COVID-19, our journalists have sort of selected words like ‘uncertain times,’ ‘unprecedented times,’ ‘challenging time,’ ‘extraordinary time,’" Makarova explained. The more-neutral terms were selected to inform the public without causing a reaction through more negative terms such as “horrible” or “scary.” "It does not have the same demoralizing effect on people," Makarova said. Olga Lovick, another U of S linguistics professor, agreed. "Every time there’s been a crisis of some sort, there have been new words coming into the language," Lovick said. Social or physical distancing and self-isolation are among the new additions to the English vocabulary. Both professors pointed to past technological developments as context for how significant change creates new language, and known words used in a new context can take on new meanings. "With social distancing, it sounds like the message of desocializing ... Do not come anywhere close to your neighbour," Makarova said. "The more we use words, the less we notice their meaning. "Words that are perceived as being negative, with common use they become more neutral because people less and less notice the original meaning."
A crisis can also lead to people finally understanding the meaning of a word they already knew. "A year ago, most of us would have been hard-pressed to explain the difference between an epidemic and a pandemic," Lovick said. "Now we know." Read more: CKOM

Languages will change significantly on interstellar flights

July 12th, 2020

It's a captivating idea: build an interstellar ark, fill it with people, flora, and fauna of every kind, and set your course for a distant star. The concept is not only science fiction gold; it has also been the subject of many scientific studies and proposals. By building a ship that can accommodate multiple generations of human beings (a generation ship), humans could colonize the known universe. But of course, there are downsides to this imaginative proposal. During such a long voyage, multiple generations of people will be born and raised inside a closed environment. This could lead to all kinds of biological issues or mutations that we simply can't foresee. But according to a new study by a team of linguistics professors, there's something else that will be subject to mutation during such a voyage—language itself. The study, "Language Development During Interstellar Travel," appeared in the April issue of Acta Futura, the journal of the European Space Agency's Advanced Concepts Team. The team consisted of Andrew McKenzie, an associate professor of linguistics at the University of Kansas, and Jeffrey Punske, an assistant professor of linguistics at Southern Illinois University. In this study, McKenzie and Punske discuss how languages evolve over time whenever communities grow isolated from one another. This would certainly be the case in the event of a long interstellar voyage and/or as a result of interplanetary colonization. Eventually, this could mean that the language of the colonists would be unintelligible to the people of Earth, should they meet up again later. For those who took English at the senior or college level, the story of Caxton's "eggys" ought to be a familiar one. In the preface to his 1490 translation of Virgil's Aeneid (Eneydos) into Middle English, he tells a story of a group of merchants who are traveling down the Thames toward Holland.
Due to poor winds, they are forced to dock in the county of Kent, just 80 km (50 mi) downriver, and look for something to eat: "And one of them named Sheffield, a merchant, came into a house and asked for meat and, specifically, he asked for eggs ("eggys"). And the good wife answered that she could speak no French. And the merchant got angry, for he could not speak French either, but he wanted eggs and she could not understand him. And then at last, another person said that he wanted 'eyren.' Then the good woman said that she understood him well." This story illustrates how people in 15th-century England could travel within the same country and experience a language barrier. Now multiply that by the 4.25 light-years to the nearest star system, and you can begin to see how language could be a major complication when it comes to interstellar travel. Read more: