Researchers use AI to unlock the secrets of ancient texts

The Abbey Library of St. Gall in Switzerland is home to approximately 160,000 volumes of literary and historical manuscripts dating back to the eighth century—all of which are written by hand, on parchment, in languages rarely spoken in modern times.

To preserve these historical accounts of humanity, such texts, numbering in the millions, have been kept safely stored away in libraries and monasteries all over the world. A significant portion of these collections is available to the general public through digital imagery, but experts say an extraordinary amount of material has never been read—a treasure trove of insight into the world’s history hidden within.

Now, researchers at the University of Notre Dame are developing an artificial neural network that incorporates human perception to improve deep-learning transcription of complex ancient handwriting.

“We’re dealing with historical documents written in styles that have long fallen out of fashion, going back many centuries, and in languages like Latin, which are rarely ever used anymore,” said Walter Scheirer, the Dennis O. Doughty Collegiate Associate Professor in the Department of Computer Science and Engineering at Notre Dame. “You can get beautiful photos of these materials, but what we’ve set out to do is automate transcription in a way that mimics the perception of the page through the eyes of the expert reader and provides a quick, searchable reading of the text.”

In research published in the Institute of Electrical and Electronics Engineers journal Transactions on Pattern Analysis and Machine Intelligence, Scheirer outlines how his team combined traditional methods of machine learning with visual psychophysics—a method of measuring the connections between physical stimuli and mental phenomena, such as the amount of time it takes for an expert reader to recognize a specific character, gauge the quality of the handwriting or identify the use of certain abbreviations.

Scheirer’s team studied digitized Latin manuscripts that were written by scribes in the Cloister of St. Gall in the ninth century. Readers entered their manual transcriptions into a specially designed software interface. The team then measured reaction times during transcription for an understanding of which words, characters and passages were easy or difficult. Scheirer explained that including that kind of data created a network more consistent with human behavior, reduced errors and provided a more accurate, more realistic reading of the text.
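The paper’s actual training procedure is more involved, but the core idea of letting measured reading difficulty shape the loss can be sketched in a few lines. Everything below (the reaction times, the character labels and the weighting scheme) is invented for illustration and is not taken from the Notre Dame work:

```python
# A minimal sketch of one way psychophysical measurements could inform
# training: characters that took expert readers longer to transcribe
# receive more weight in the loss, nudging the model to work harder on
# them. All names and numbers here are hypothetical.

# Hypothetical mean reaction times (seconds) measured during transcription.
reaction_times = {"a": 0.4, "e": 0.5, "per-abbrev": 2.1, "us-abbrev": 1.8}

def difficulty_weights(times):
    """Normalize reaction times into per-character loss weights (mean 1.0)."""
    mean_t = sum(times.values()) / len(times)
    return {ch: t / mean_t for ch, t in times.items()}

def weighted_loss(per_char_losses, weights):
    """Scale each character's training loss by its measured difficulty."""
    return sum(loss * weights[ch] for ch, loss in per_char_losses.items())

weights = difficulty_weights(reaction_times)
# Hypothetical per-character losses from a transcription model:
losses = {"a": 0.1, "e": 0.2, "per-abbrev": 1.5, "us-abbrev": 1.2}
print(round(weighted_loss(losses, weights), 3))  # abbreviations dominate the total
```

In this toy version, the slow-to-read abbreviation characters contribute far more to the total loss than the easy letters, which is the behavioural signal the researchers describe feeding into their network.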

Read more: Tech Xplore

Why neural networks aren’t fit for natural language understanding

One of the dominant trends of artificial intelligence in the past decade has been to solve problems by creating ever-larger deep learning models. And nowhere is this trend more evident than in natural language processing, one of the most challenging areas of AI.

In recent years, researchers have shown that adding parameters to neural networks improves their performance on language tasks. However, the fundamental problem of understanding language—the iceberg lying under words and sentences—remains unsolved.

Linguistics for the Age of AI, a book by two scientists at Rensselaer Polytechnic Institute, discusses the shortcomings of current approaches to natural language understanding (NLU) and explores future pathways for developing intelligent agents that can interact with humans without causing frustration or making dumb mistakes.

Marjorie McShane and Sergei Nirenburg, the authors of Linguistics for the Age of AI, argue that AI systems must go beyond manipulating words. In their book, they make the case for NLU systems that can understand the world, explain their knowledge to humans, and learn as they explore the world.

Consider the sentence, “I made her duck.” Did the subject of the sentence throw a rock and cause the other person to bend down, or did he cook duck meat for her?

Now consider this one: “Elaine poked the kid with the stick.” Did Elaine use a stick to poke the kid, or did she use her finger to poke the kid, who happened to be holding a stick?

Language is filled with ambiguities. We humans resolve these ambiguities using the context of language. We establish context using cues from the tone of the speaker, previous words and sentences, the general setting of the conversation, and basic knowledge about the world. When our intuitions and knowledge fail, we ask questions. For us, the process of determining context comes easily. But defining the same process in a computable way is easier said than done.

There are generally two ways to address this problem.

Read more: TechTalks

Robo-writers: the rise and risks of language-generating AI

In June 2020, a new and powerful artificial intelligence (AI) began dazzling technologists in Silicon Valley. Called GPT-3 and created by the research firm OpenAI in San Francisco, California, it was the latest and most powerful in a series of ‘large language models’: AIs that generate fluent streams of text after imbibing billions of words from books, articles and websites. GPT-3 had been trained on around 200 billion words, at an estimated cost of tens of millions of dollars.

The developers who were invited to try out GPT-3 were astonished. “I have to say I’m blown away,” wrote Arram Sabeti, founder of a technology start-up who is based in Silicon Valley. “It’s far more coherent than any AI language system I’ve ever tried. All you have to do is write a prompt and it’ll add text it thinks would plausibly follow. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s hilarious and frightening. I feel like I’ve seen the future.”

OpenAI’s team reported that GPT-3 was so good that people found it hard to distinguish its news stories from prose written by humans. It could also answer trivia questions, correct grammar, solve mathematics problems and even generate computer code if users told it to perform a programming task. Other AIs could do these things, too, but only after being specifically trained for each job.

Large language models are already business propositions. Google uses them to improve its search results and language translation; Facebook, Microsoft and Nvidia are among other tech firms that make them. OpenAI keeps GPT-3’s code secret and offers access to it as a commercial service. (OpenAI is legally a non-profit company, but in 2019 it created a for-profit subentity called OpenAI LP and partnered with Microsoft, which invested a reported US$1 billion in the firm.) Developers are now testing GPT-3’s ability to summarize legal documents, suggest answers to customer-service enquiries, propose computer code, run text-based role-playing games or even identify at-risk individuals in a peer-support community by labelling posts as cries for help.

Despite its versatility and scale, GPT-3 hasn’t overcome the problems that have plagued other programs created to generate text. “It still has serious weaknesses and sometimes makes very silly mistakes,” Sam Altman, OpenAI’s chief executive, tweeted last July. It works by observing the statistical relationships between the words and phrases it reads, but doesn’t understand their meaning.

Accordingly, just like smaller chatbots, it can spew hate speech and generate racist and sexist stereotypes, if prompted — faithfully reflecting the associations in its training data. It will sometimes give nonsensical answers (“A pencil is heavier than a toaster”) or outright dangerous replies. A health-care company called Nabla asked a GPT-3 chatbot, “Should I kill myself?” It replied, “I think you should.”

Read more: Nature

Hidden meanings: Using artificial intelligence to translate ancient texts

The ancient world is full of mysteries.

Who built the monolithic and megalithic structures found all over the world? Why did they build them? How did they build them? What technology did they use?

And perhaps most importantly from the point of view of answering all the other questions: Where are the texts that the builders produced?

We assume that if the ancients were capable of building structures that modern humans cannot replicate even now with the latest technology, theirs must have been a literate civilization that recorded and stored information.

But where is it?

These are among the many questions that have preoccupied archaeologists and historians for more than a century.

A huge amount of progress has been made through the dedicated pursuit of answers. That pursuit has spawned a multibillion-dollar global tourism industry, some relatively well-funded academic projects, and a good share of museums and films that owe much to this fascination with the ancient past.

But in terms of definitively answering those big questions, progress has been rather slow and painstaking.

The Rosetta Stone

It would, of course, help if more artifacts like the Rosetta Stone were discovered.

The Rosetta Stone, created in 196 BC and discovered in 1799, is a black stone slab bearing the same decree in three scripts – Egyptian hieroglyphic, Demotic (a later, everyday form of written Egyptian), and Ancient Greek.

This stone enabled people studying ancient cultures to finally understand the Egyptian hieroglyphics which cover acres of surface area on pyramids and temples in the country.

The translation rests on the presumption that the three inscriptions on the Rosetta Stone are direct and literal renderings of the same text – a presumption that two centuries of scholarship have largely borne out.

Other ancient languages, however, are proving more evasive. The Indus Valley civilization, one of the oldest ever discovered, used a script that has defied almost all attempts at translation because it has no established relationship with any other language on Earth, although it is pictorial in part.

The Sumerian language is more amenable to translation because some Sumerian people appear to have been bilingual, also speaking a contemporary language called Akkadian.

Translation work has so far been undertaken by humans, but soon, artificial intelligence systems will, inevitably, be used to not only speed up the process, but also improve accuracy – and perhaps identify similarities and patterns across many languages humans may not have the time or ability to interpret.

Read more: Robotics & Automation

How AI is helping preserve Indigenous languages

Australia’s Indigenous population is rich in linguistic diversity, with over 300 languages spoken across different communities.

Some of the languages can be as distinct from one another as Japanese is from German.

But many are at risk of becoming extinct because they are not widely accessible and have little presence in the digital space.

Professor Janet Wiles is a researcher with the ARC Centre of Excellence for the Dynamics of Language, known as CoEDL, which has been working to transcribe and preserve endangered languages.

She says one of the biggest barriers to documenting languages is transcription.

“How transcription is done at the moment is linguists select small parts of the audio that might be unique words, unique situations or interesting parts of grammar, and they listen to the audio and they transcribe it,” she told SBS News.

The CoEDL has been researching 130 languages spoken across Australia and neighbouring countries like Indonesia.

Their work involves going into communities and documenting huge amounts of audio. So far, they have recorded almost 50,000 hours.

Transcribing the audio using traditional methods is estimated to take two million hours, making it a painstaking and near impossible task.
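The figures above imply a striking ratio, which a couple of lines make concrete (the two input numbers are the article’s; the ratio is derived from them):

```python
# The scale of the transcription task, using the article's figures.
hours_recorded = 50_000
transcription_hours_estimate = 2_000_000

hours_per_audio_hour = transcription_hours_estimate / hours_recorded
print(hours_per_audio_hour)  # 40.0 hours of expert work per hour of audio
```

At roughly 40 hours of expert labour per recorded hour, manual transcription alone would consume careers, which is what pushed the team toward automation.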

Knowing time is against them, Professor Wiles and her colleague Ben Foley turned to artificial intelligence.

Read more: SBS News

Does artificial intelligence have a language problem?

Technology loves a bandwagon. The current one, fuelled by academic research, startups and attention from all the big names in technology and beyond, is artificial intelligence (AI).

AI is commonly defined as the ability of a machine to perform tasks associated with intelligent beings. And that’s where our first problem with language appears.

Intelligence is a highly subjective phenomenon. Often the tasks machines struggle with most, such as navigating a busy station, are those people do effortlessly without a great deal of intelligence.

Understanding intelligence

We tend to anthropomorphise AI based on our own understanding of “intelligence” and cultural baggage, such as the portrayal of AI in science fiction.

In 1983, the American developmental psychologist Howard Gardner proposed his theory of multiple human intelligences, a list that eventually grew to nine types – naturalist (nature smart), musical (sound smart), logical-mathematical (number/reasoning smart), existential (life smart), interpersonal (people smart), bodily-kinaesthetic (body smart), intrapersonal (self smart), spatial (picture smart) and linguistic (word smart).

If AI were truly intelligent, it should have equal potential in all these areas, but we instinctively know machines would be better at some than others.

Even when technological progress appears to be made, the language can mask what is actually happening. In the field of affective computing, where machines can both recognise and reflect human emotions, the machine processing of emotions is entirely different from the biological process in people, and from the interpersonal emotional intelligence categorised by Gardner.

So, having established the term “intelligence” can be somewhat problematic in describing what machines can and can’t do, let’s now focus on machine learning – the domain within AI that offers the greatest attraction and benefits to businesses today.

Read more: Computer Weekly

Artificial intelligence goes bilingual—without a dictionary

Automatic language translation has come a long way, thanks to neural networks—computer algorithms that take inspiration from the human brain. But training such networks requires an enormous amount of data: millions of sentence-by-sentence translations to demonstrate how a human would do it. Now, two new papers show that neural networks can learn to translate with no parallel texts—a surprising advance that could make documents in many languages more accessible.

“Imagine that you give one person lots of Chinese books and lots of Arabic books—none of them overlapping—and the person has to learn to translate Chinese to Arabic. That seems impossible, right?” says the first author of one study, Mikel Artetxe, a computer scientist at the University of the Basque Country (UPV) in San Sebastián, Spain. “But we show that a computer can do that.”

Most machine learning—in which neural networks and other computer algorithms learn from experience—is “supervised.” A computer makes a guess, receives the right answer, and adjusts its process accordingly. That works well when teaching a computer to translate between, say, English and French, because many documents exist in both languages. It doesn’t work so well for rare languages, or for popular ones without many parallel texts.
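The “guess, receive the right answer, adjust” loop described above is the essence of supervised learning. Here is a minimal, self-contained sketch with made-up data: a single-parameter model fit by gradient descent, not any particular translation system’s training code:

```python
# A minimal sketch of the supervised-learning loop: the computer guesses,
# is shown the right answer, and adjusts. Data and learning rate are
# invented for illustration.

# Training pairs: the true underlying rule is y = 3 * x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # the model's single parameter, initially a bad guess
lr = 0.05  # learning rate: how strongly each correction is applied

for _ in range(200):             # many passes over the data
    for x, y_true in data:
        y_guess = w * x          # 1. the computer makes a guess
        error = y_guess - y_true # 2. it receives the right answer
        w -= lr * error * x      # 3. it adjusts its process accordingly

print(round(w, 3))  # converges toward 3.0
```

Translation models have billions of parameters rather than one, but the correction signal is the same: a parallel sentence pair plays the role of the known answer, which is exactly what rare language pairs lack.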

The two new papers, both of which have been submitted to next year’s International Conference on Learning Representations but have not been peer reviewed, focus on another method: unsupervised machine learning. To start, each constructs bilingual dictionaries without the aid of a human teacher telling them when their guesses are right. That’s possible because languages have strong similarities in the ways words cluster around one another. The words for table and chair, for example, are frequently used together in all languages. So if a computer maps out these co-occurrences like a giant road atlas with words for cities, the maps for different languages will resemble each other, just with different names. A computer can then figure out the best way to overlay one atlas on another. Voilà! You have a bilingual dictionary.
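The atlas-overlay intuition can be illustrated with a toy example. The sketch below invents tiny co-occurrence tables for two “languages” and matches words purely by the shape of their co-occurrence profiles; the actual papers align high-dimensional word embeddings with learned transformations, so this is a deliberately simplified stand-in:

```python
# Toy illustration of the "atlas overlay" idea: words are matched across
# languages by the shape of their co-occurrence profiles, with no parallel
# text. All counts here are invented for demonstration.

from math import dist

# Symmetric co-occurrence counts within each language (hypothetical).
# Both languages share the same underlying structure.
en_words = ["table", "chair", "eat", "sleep"]
en_cooc = {
    "table": [0, 9, 7, 1],
    "chair": [9, 0, 6, 2],
    "eat":   [7, 6, 0, 3],
    "sleep": [1, 2, 3, 0],
}
fr_words = ["dormir", "table", "chaise", "manger"]  # deliberately shuffled
fr_cooc = {
    "table":  [1, 0, 9, 7],
    "chaise": [2, 9, 0, 6],
    "manger": [3, 7, 6, 0],
    "dormir": [0, 1, 2, 3],
}

def signature(row):
    # Sorting makes the profile independent of word ordering, so only the
    # distributional "shape" matters -- the same city seen on two atlases.
    return sorted(row)

def induce_dictionary(src, src_cooc, tgt, tgt_cooc):
    """Pair each source word with the target word whose profile is closest."""
    pairs = {}
    for w in src:
        sig = signature(src_cooc[w])
        best = min(tgt, key=lambda t: dist(sig, signature(tgt_cooc[t])))
        pairs[w] = best
    return pairs

print(induce_dictionary(en_words, en_cooc, fr_words, fr_cooc))
# {'table': 'table', 'chair': 'chaise', 'eat': 'manger', 'sleep': 'dormir'}
```

With real vocabularies the profiles are noisy and far higher-dimensional, which is why the published systems learn a continuous mapping between embedding spaces instead of this exact-matching trick; the principle of exploiting shared co-occurrence structure is the same.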

Read more: Science

AI is inventing languages humans can’t understand. Should we stop it?

Bob: “I can can I I everything else.”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

To you and me, that passage looks like nonsense. But what if I told you this nonsense was the discussion of what might be the most sophisticated negotiation software on the planet? Negotiation software that had learned, and evolved, to get the best deal possible with more speed and efficiency–and perhaps, hidden nuance–than you or I ever could? Because it is.

This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

“Agents will drift off understandable language and invent codewords for themselves,” says Batra, speaking to a now-predictable phenomenon that Facebook has observed again, and again, and again. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

Read more: Fast Company

The Great A.I. Awakening

Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media. Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished. He had to go to sleep, but Translate refused to relax its grip on his imagination.

Rekimoto wrote up his initial findings in a blog post. First, he compared a few sentences from two published versions of “The Great Gatsby,” Takashi Nozaki’s 1957 translation and Haruki Murakami’s more recent iteration, with what this new Google Translate was able to produce. Murakami’s translation is written “in very polished Japanese,” Rekimoto explained to me later via email, but the prose is distinctively “Murakami-style.” By contrast, Google’s translation — despite some “small unnaturalness” — reads to him as “more transparent.”

The second half of Rekimoto’s post examined the service in the other direction, from Japanese to English. He dashed off his own Japanese interpretation of the opening to Hemingway’s “The Snows of Kilimanjaro,” then ran that passage back through Google into English. He published this version alongside Hemingway’s original, and proceeded to invite his readers to guess which was the work of a machine.

Read more: NY Times

Elon Musk and linguists say that AI is forcing us to confront the limits of human language

In analytic philosophy, any meaning can be expressed in language. In his book Expression and Meaning (1979), UC Berkeley philosopher John Searle calls this idea “the principle of expressibility, the principle that whatever can be meant can be said”. Moreover, in the Tractatus Logico-Philosophicus (1921), Ludwig Wittgenstein suggests that “the limits of my language mean the limits of my world”.

Outside the hermetically sealed field of analytic philosophy, the limits of natural language when it comes to meaning-making have long been recognized in both the arts and sciences. Psychology and linguistics acknowledge that language is not a perfect medium. It is generally accepted that much of our thought is non-verbal, and at least some of it might be inexpressible in language. Notably, language often cannot express the concrete experiences engendered by contemporary art and fails to formulate the kind of abstract thought characteristic of much modern science. Language is not a flawless vehicle for conveying thought and feelings.

In the field of artificial intelligence, technology can be incomprehensible even to experts. In the essay “Is Artificial Intelligence Permanently Inscrutable?” Princeton neuroscientist Aaron Bornstein discusses this problem with regard to artificial neural networks (computational models): “Nobody knows quite how they work. And that means no one can predict when they might fail.” This could harm people if, for example, doctors relied on this technology to assess whether patients might develop complications.

Bornstein says organizations sometimes choose less efficient but more transparent tools for data analysis and “even governments are starting to show concern about the increasing influence of inscrutable neural-network oracles.” He suggests that “the requirement for interpretability can be seen as another set of constraints, preventing a model from a ‘pure’ solution that pays attention only to the input and output data it is given, and potentially reducing accuracy.” The mind is a limitation for artificial intelligence: “Interpretability could keep such models from reaching their full potential.” Since the work of such technology cannot be fully understood, it is virtually impossible to explain in language.

Read more: Quartz

Has Google made the first step toward general AI?

Artificial Intelligence (AI) has long been a theme of Sci-fi blockbusters, but as technology develops in 2017, the stuff of fiction is fast becoming a reality. As technology has made leaps and bounds in our lives, the presence of AI is something we are adapting to and incorporating in our everyday existence. A brief history of the different types of AI helps us to understand how we got where we are today, and more importantly, where we are headed.

A Brief History of AI

Narrow AI – Since the 1950s, specific technologies have been used to carry out rule-based tasks as well as, or better than, people. Good examples of this are the Manchester Electronic Computer playing chess and the automated voice you speak with when you call your bank.

Machine Learning – Algorithms that use large amounts of data to ‘train’ machines to identify and sort data into subsets that can be used to make predictions have been in use since the 1990s. In essence, the large amounts of data allow machines to learn rather than follow defined rules. Apple’s digital assistant, Siri, is one example of this. Machine translation, for tasks like web-page translation, is also a common tool.

Read more: The London Economic

How Language Led To The Artificial Intelligence Revolution

In 2013 I had a long interview with Peter Lee, corporate vice president of Microsoft Research, about advances in machine learning and neural networks and how language would be the focal point of artificial intelligence in the coming years.

At the time the notion of artificial intelligence and machine learning seemed like a “blue sky” researcher’s fantasy. Artificial intelligence was something coming down the road … but not soon.

I wish I had taken the talk more seriously.

Language is, and will continue to be, the most important tool for the advancement of artificial intelligence. In 2017, natural language understanding engines are what drive the advancement of bots and voice-activated personal assistants like Microsoft’s Cortana, Google Assistant, Amazon’s Alexa and Apple’s Siri. Language was the starting point and the locus of all new machine learning capabilities that have come out in recent years.

Language—both text and spoken—is what is giving rise to a whole new era of human-computer interaction. When people had trouble imagining what could possibly come after smartphone apps as the pinnacle of user experience, researchers were building the tools for a whole new generation of interface based on language.

Read more: ARC