AI is inventing languages humans can’t understand. Should we stop it?

July 15th, 2017. Bob: “I can can I I everything else.” Alice: “Balls have zero to me to me to me to me to me to me to me to me to.” To you and me, that passage looks like nonsense. But what if I told you this nonsense was the discussion of what might be the most sophisticated negotiation software on the planet? Negotiation software that had learned, and evolved, to get the best deal possible with more speed and efficiency–and perhaps hidden nuance–than you or I ever could? Because it is. This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI-vs.-AI dogfighting that researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

“Agents will drift off understandable language and invent codewords for themselves,” says Batra, describing a now-predictable phenomenon that Facebook has observed again, and again, and again. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.” Read more: Fast Company
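To make the incentive problem Batra describes concrete, here is a minimal sketch in Python. It is illustrative only, not Facebook's actual training code, and all the names and numbers are invented: the point is that the reward scores only the negotiated outcome, and no term rewards staying in English, so any token sequence that secures the same deal scores exactly the same.

```python
# Illustrative sketch (not Facebook's code): reward depends only on the deal.

def deal_value(items_won: dict, my_priorities: dict) -> float:
    """Reward = items I win, weighted by how much I value each one."""
    return sum(my_priorities[item] * count for item, count in items_won.items())

def reward(utterance: str, items_won: dict, my_priorities: dict) -> float:
    # Missing piece: nothing here penalizes drifting away from English,
    # so repetition can become a code word (e.g. "to me" said five times
    # meaning "give me five of this item").
    return deal_value(items_won, my_priorities)

priorities = {"balls": 2.0}
won = {"balls": 5}
print(reward("I would like all five balls, please.", won, priorities))        # 10.0
print(reward("Balls have zero to me to me to me to me to", won, priorities))  # 10.0
```

Under this objective, fluent English and invented shorthand are exactly tied, so agents optimizing the reward have no reason to prefer the former.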

Elon Musk and linguists say that AI is forcing us to confront the limits of human language

June 14th, 2017. In analytic philosophy, any meaning can be expressed in language. In his book Expression and Meaning (1979), UC Berkeley philosopher John Searle calls this idea “the principle of expressibility, the principle that whatever can be meant can be said”. Moreover, in the Tractatus Logico-Philosophicus (1921), Ludwig Wittgenstein suggests that “the limits of my language mean the limits of my world”. Outside the hermetically sealed field of analytic philosophy, the limits of natural language when it comes to meaning-making have long been recognized in both the arts and sciences. Psychology and linguistics acknowledge that language is not a perfect medium. It is generally accepted that much of our thought is non-verbal, and at least some of it might be inexpressible in language. Notably, language often cannot express the concrete experiences engendered by contemporary art, and it fails to formulate the kind of abstract thought characteristic of much modern science. Language is not a flawless vehicle for conveying thought and feelings.

In the field of artificial intelligence, technology can be incomprehensible even to experts. In the essay “Is Artificial Intelligence Permanently Inscrutable?” Princeton neuroscientist Aaron Bornstein discusses this problem with regard to artificial neural networks (computational models): “Nobody knows quite how they work. And that means no one can predict when they might fail.” This could harm people if, for example, doctors relied on this technology to assess whether patients might develop complications. Bornstein says organizations sometimes choose less efficient but more transparent tools for data analysis and “even governments are starting to show concern about the increasing influence of inscrutable neural-network oracles.” He suggests that “the requirement for interpretability can be seen as another set of constraints, preventing a model from a ‘pure’ solution that pays attention only to the input and output data it is given, and potentially reducing accuracy.” Human understanding thus becomes a limitation on artificial intelligence: “Interpretability could keep such models from reaching their full potential.” Since the workings of such technology cannot be fully understood, they are virtually impossible to explain in language. Read more: Quartz
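Bornstein's point that interpretability acts as a constraint can be illustrated with a small sketch. The example below is hypothetical and does not come from the essay; the data is invented, and the model choice (a depth-limited decision tree versus an unconstrained one) is simply one common way the transparency-versus-fit trade-off shows up in practice.

```python
# Hypothetical sketch of "interpretability as a constraint" (invented data).
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[65, 1], [80, 0], [50, 1], [90, 1], [70, 0], [55, 0]]  # [age, smoker]
y = [0, 1, 0, 1, 0, 0]  # invented "developed complications" labels

# Constrained model: shallow enough that its full logic can be printed.
transparent = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Unconstrained model: free to fit the data more closely, harder to read.
unconstrained = DecisionTreeClassifier().fit(X, y)

# The transparent model's entire decision process, as human-readable rules:
print(export_text(transparent, feature_names=["age", "smoker"]))
```

The constraint (here, `max_depth=2`) is exactly what keeps the model explainable, and exactly what may prevent it from reaching the "pure" solution Bornstein describes.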

Has Google made the first step toward general AI?

June 7th, 2017. Artificial Intelligence (AI) has long been a theme of sci-fi blockbusters, but as technology develops in 2017, the stuff of fiction is fast becoming a reality. As technology has made leaps and bounds in our lives, the presence of AI is something we are adapting to and incorporating into our everyday existence. A brief history of the different types of AI helps us to understand how we got where we are today, and more importantly, where we are headed.

A Brief History of AI

Narrow AI – Since the 1950s, specific technologies have been used to carry out rule-based tasks as well as, or better than, people. A good example of this is the Manchester Electronic Computer for playing chess or the automated voice you speak with when you call your bank.

Machine Learning – Algorithms that use large amounts of data to ‘train’ machines to properly identify and separate appropriate data into subsets that can be used to make predictions have been in use since the 1990s. Large amounts of data essentially allow machines to learn rather than follow defined rules. Apple’s digital assistant, Siri, is one example of this. Machine translation for processes like web-page translation is also a common tool. Read more: The London Economic
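As a concrete illustration of the machine-learning idea described above (a model fit to labeled examples rather than hand-written rules), here is a minimal, hypothetical Python sketch; the task and data are invented for illustration.

```python
# Minimal sketch of learning from data instead of coding rules (invented data).
from sklearn.tree import DecisionTreeClassifier

# Invented training examples: [hour_sent, message_length] -> spam label
X_train = [[2, 400], [3, 380], [14, 60], [15, 45], [1, 500], [16, 30]]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = spam, 0 = not spam

# No programmer wrote the classification rules; they are induced from data.
model = DecisionTreeClassifier().fit(X_train, y_train)

# Predictions on new, unseen examples:
print(model.predict([[2, 450], [14, 50]]))  # e.g. [1 0]
```

The contrast with narrow AI above is the key point: the chess machine followed rules its programmers defined, while the model here derives its own rules from the training set.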

How Language Led To The Artificial Intelligence Revolution

June 3rd, 2017. In 2013 I had a long interview with Peter Lee, corporate vice president of Microsoft Research, about advances in machine learning and neural networks and how language would be the focal point of artificial intelligence in the coming years. At the time the notion of artificial intelligence and machine learning seemed like a “blue sky” researcher’s fantasy. Artificial intelligence was something coming down the road … but not soon. I wish I had taken the talk more seriously.

Language is, and will continue to be, the most important tool for the advancement of artificial intelligence. In 2017, natural language understanding engines are what drive the advancement of bots and voice-activated personal assistants like Microsoft’s Cortana, Google Assistant, Amazon’s Alexa and Apple’s Siri. Language was the starting point and the locus of all new machine learning capabilities that have come out in recent years. Language—both text and spoken—is what is giving rise to a whole new era of human-computer interaction. When people had trouble imagining what could possibly come after smartphone apps as the pinnacle of user experience, researchers were building the tools for a whole new generation of interface based on language. Read more: ARC

The Race to Create AI-Enabled, Natural-Language and Voice Interface Platforms

May 4th, 2017. Did you ever stop to wonder: What is Amazon not doing with technology? These days, you’d be hard-pressed to answer that question, given the company’s incessant schedule for announcing updates and new products. The Seattle-based e-commerce giant is seemingly everywhere—whether it’s the latest cloud offerings in AWS, new entertainment shows on Prime, automated retail stores, leased fleets of Boeing jets, smart speakers, payment systems, autonomous cars and trucks, freight forwarding companies, or airborne warehouses. Amazon also happens to have warehouses within 20 miles of 44% of the population of the United States, according to Piper Jaffray analyst Gene Munster.

In many of the company’s recent announcements, Amazon’s voice assistant Alexa plays a central role. It’s clear that artificial-intelligence-enabled, natural-language, voice recognition systems are going to be even more important to Amazon in the future. In fact, company CEO Jeff Bezos says Alexa could be the fourth pillar of its business. Complementing Amazon’s retail marketplace, AWS and Amazon Prime, Alexa and its 10,000-plus “skills” could become one of Amazon’s strategic initiatives. Nor is this important only to Amazon: all the major tech companies are gearing up for a major competitive battle in this evolving platform war. There are a range of personal assistants from the likes of Google (Google Now), Microsoft (Cortana), Apple (Siri) and Samsung (Bixby), as well as Watson, the natural language cognitive computing platform from IBM. Read more: Internet of Things Institute

AI Systems Are Learning to Communicate With Humans

May 3rd, 2017. In the future, service robots equipped with artificial intelligence (AI) are bound to be a common sight. These bots will help people navigate crowded airports, serve meals, or even schedule meetings. As these AI systems become more integrated into daily life, it is vital to find an efficient way to communicate with them. It is obviously more natural for a human to speak in plain language than in a string of code. Further, as the relationship between humans and robots grows, it will be necessary to engage in conversations, rather than just give orders.

This human-robot interaction is what Manuela M. Veloso’s research is all about. Veloso, a professor at Carnegie Mellon University, has focused her research on CoBots, autonomous indoor mobile service robots that transport items, guide visitors to building locations, and traverse the halls and elevators. The CoBots have been navigating autonomously for several years now and have traveled more than 1,000 km. These accomplishments have enabled the research team to pursue a new direction, focusing now on novel human-robot interaction. “If you really want these autonomous robots to be in the presence of humans and interacting with humans, and being capable of benefiting humans, they need to be able to talk with humans,” Veloso says. Read more: Futurism

What the Evolution of Language in Humans Means for AI

April 12th, 2017. In his new book, Agreement Beyond Phi, MIT linguist Shigeru Miyagawa explores the concept of universal languages by analyzing similarities between a range of languages. This is a topic in linguistics that gives a fresh take on the science of words and their construction, and Miyagawa is hopeful that his research and analysis will allow him to apply his theory to more languages outside the Indo-European family. In his book, he makes the argument that all languages have allocutive agreement, which is defined as “a morphological feature in which the gender of an addressee is marked overtly in an utterance using fully grammaticalized markers.” For the non-linguists out there, it’s kind of like a form of subject-verb agreement that lets formality change the form a verb takes. In French, for example, to say “you have” to a friend would be “tu as,” but to a professor or a doctor, the polite version would be “vous avez.” This is notable in Basque, as well as in Japanese, which has its own method of “politeness marking.” In the book, he details the similarities between languages like French, English, Basque, Japanese, Dinka, and Jingpo, but he would like to continue exploring further. Read more: Edgy Labs

Will robots destroy human language?

March 16th, 2017. As consumers interact more and more with AI like Alexa, Siri and Cortana – not to mention brand chatbots – human language will change. That was the topic of conversation at a recent panel during Social Media Week that asked in part whether technology will corrupt language. AI is also changing our relationship with technology, particularly among children who grow up with voice-enabled devices, and so the key for brands and marketers may very well be figuring out how to give robots more human-like speech, as well as to make them more empathetic. But that may also be easier said than done.

History repeats itself. Erin McKean, founder of online English dictionary Wordnik, noted that anxiety about technology changing language is nothing new. The Greek philosopher Plato was against people writing things down because he thought it would ruin our memories, for example. Virtually every innovation since – the printing press, radio, the telegraph, TV, movies and the internet – has been accused of killing language, according to McKean. “The telegraph is a great example with modern parallels,” added Ben Zimmer, language columnist for the Wall Street Journal. “[Philosopher and poet Henry David] Thoreau thought it would help us communicate more quickly, but we’d have nothing to say. It stripped down language, causing language to be used in a very functional way.” Read more: The Drum

How Silicon Valley is teaching language to machines

January 27th, 2017. The dream of building computers or robots that communicate like humans has been with us for many decades now. And if market trends and investment levels are any guide, it’s something we would really like to have. MarketsandMarkets says the natural language processing (NLP) industry will be worth $16.07 billion by 2021, growing at a rate of 16.1 percent, and deep learning is estimated to reach $1.7 billion by 2022, growing at a CAGR of 65.3 percent between 2016 and 2022.

Of course, if you’ve played with any chatbots, you will know that it’s a promise that has yet to be fulfilled. There’s an “uncanny valley” where, at one end, we sense we’re not talking to a real person and, at the other end, the machine just doesn’t “get” what we mean. For example, when using a fun weather bot like Poncho I may ask, “If I go outside, what should I wear?” The bot responds, “Oops, I didn’t catch that. For things I can help you with, type ‘help’.” Yet, when I ask, “If I go outside, should I take an umbrella?,” the bot’s almost too-clever response is “Nah, you won’t need your umbrella in Santa Clara, CA.” Read more: Venture Beat
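A minimal sketch shows why chatbots fail this way. The Python example below is hypothetical, not Poncho's actual implementation: simple keyword-based intent matching answers anything containing a known trigger word and falls through to a canned apology otherwise, so a paraphrase without the trigger word goes unrecognized.

```python
# Hypothetical keyword-based intent matching (not Poncho's real code).

INTENTS = {
    "umbrella": "Nah, you won't need your umbrella in Santa Clara, CA.",
    "temperature": "It's 72F in Santa Clara, CA.",
}

def respond(message: str) -> str:
    # Answer if any known trigger word appears in the message...
    for keyword, answer in INTENTS.items():
        if keyword in message.lower():
            return answer
    # ...otherwise fall through to the canned apology.
    return "Oops, I didn't catch that. For things I can help you with, type 'help'."

print(respond("If I go outside, should I take an umbrella?"))  # trigger matched
print(respond("If I go outside, what should I wear?"))         # falls through
```

The "what should I wear?" question needs the bot to infer that clothing choices depend on weather, which is exactly the kind of meaning that keyword triggers cannot capture.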

Google Translate AI invents its own language to translate with

December 1st, 2016. Google Translate is getting brainier. The online translation tool recently started using a neural network to translate between some of its most popular languages – and the system is now so clever it can do this for language pairs on which it has not been explicitly trained. To do this, it seems to have created its own artificial language.

Traditional machine-translation systems break sentences into words and phrases, and translate each individually. In September, Google Translate unveiled a new system that uses a neural network to work on entire sentences at once, giving it more context to figure out the best translation. This system is now in action for eight of the most common language pairs on which Google Translate works. Although neural machine-translation systems are fast becoming popular, most only work on a single pair of languages, so different systems are needed to translate between others. With a little tinkering, however, Google has extended its system so that it can handle multiple pairs – and it can translate between two languages when it hasn’t been directly trained to do so. For example, if the neural network has been taught to translate between English and Japanese, and English and Korean, it can also translate between Japanese and Korean without first going through English. This capability may enable Google to quickly scale the system to translate between a large number of languages.

“This is a big advance,” says Kyunghyun Cho at New York University. His team and another group at Karlsruhe Institute of Technology in Germany have independently published similar studies working towards neural translation systems that can handle multiple language combinations. Read more: New Scientist
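The mechanism behind this, as described in the Google team's published work on multilingual neural translation, is strikingly simple: one shared model is trained on many language pairs, with an artificial token prepended to each source sentence telling the model which language to produce. The sketch below illustrates only that data-preparation step; it is not Google's code, and the token format and example sentences are invented for illustration.

```python
# Illustrative sketch of the target-language-token trick (not Google's code).

def tag_source(sentence: str, target_lang: str) -> str:
    """Prepend the artificial token that tells the shared model what to emit."""
    return f"<2{target_lang}> {sentence}"

# Training pairs the system has actually seen:
train = [
    (tag_source("How are you?", "ja"), "お元気ですか"),    # English -> Japanese
    (tag_source("How are you?", "ko"), "어떻게 지내세요"),  # English -> Korean
]

# Zero-shot request at inference time: Japanese -> Korean, a direction never
# present in training, expressed through the exact same token mechanism.
print(tag_source("お元気ですか", "ko"))  # "<2ko> お元気ですか"
```

Because every language flows through the same shared encoder and decoder, the model ends up representing sentences in a common internal space, which is the "artificial language" the article alludes to.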

Machines may never master the distinctly human elements of language

November 6th, 2016. Artificial intelligence is difficult to develop because real intelligence is mysterious. This mystery manifests in language, or “the dress of thought” as the writer Samuel Johnson put it, and language remains a major challenge to the development of artificial intelligence. “There’s no way you can have an AI system that’s humanlike that doesn’t have language at the heart of it,” Josh Tenenbaum, a professor of cognitive science and computation at MIT, told Technology Review in August. In September, Google announced that its Neural Machine Translation (GNMT) system can now “in some cases” produce translations that are “nearly indistinguishable” from those of humans. Still, it noted: “Machine translation is by no means solved. GNMT can still make significant errors that a human translator would never make, like dropping words and mistranslating proper names or rare terms, and translating sentences in isolation rather than considering the context of the paragraph or page.” In other words, the machine doesn’t entirely get how words work yet. Read more: Quartz

Artificial intelligence and language

March 16th, 2016. The concept of artificial intelligence has been around for a long time. We’re all familiar with HAL 9000 from 2001: A Space Odyssey, C-3PO from Star Wars and, more recently, Samantha from Her. In written fiction, AI characters show up in stories from writers like Philip K. Dick, William Gibson and Isaac Asimov. Sometimes it seems as if every writer of sci-fi has touched on it. While many predictions and ideas put forward in sci-fi have come to life, artificial intelligence is probably the furthest behind. We are nowhere near true artificial intelligence as exemplified by the characters mentioned above. Read more: TechCrunch