As consumers increasingly interact with AI assistants like Alexa, Siri and Cortana – not to mention brand chatbots – human language will change. That was the topic of conversation at a recent panel during Social Media Week that asked, in part, whether technology will corrupt language.
AI is also changing our relationship with technology, particularly among children who grow up with voice-enabled devices. The key for brands and marketers, then, may be figuring out how to give robots more human-like speech and make them more empathetic. But that may be easier said than done.
History repeats itself
Erin McKean, founder of online English dictionary Wordnik, noted that anxiety about technology changing language is nothing new. The Greek philosopher Plato, for example, opposed writing things down because he thought it would ruin our memories. Since then, virtually every innovation – the printing press, radio, the telegraph, TV, movies and the internet – has been accused of killing language, according to McKean.
“The telegraph is a great example with modern parallels,” added Ben Zimmer, language columnist for the Wall Street Journal. “[Philosopher and poet Henry David] Thoreau thought it would help us communicate more quickly, but we’d have nothing to say. It stripped down language, causing it to be used in a very functional way.”
Read more: The Drum
The dream of building computers or robots that communicate like humans has been with us for many decades now. And if market trends and investment levels are any guide, it’s something we would really like to have. MarketsandMarkets says the natural language processing (NLP) industry will be worth $16.07 billion by 2021, growing at a CAGR of 16.1 percent, and the deep learning market is estimated to reach $1.7 billion by 2022, growing at a CAGR of 65.3 percent between 2016 and 2022.
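As a quick sanity check on those projections, CAGR compounds annually, so the cited 2022 figure implies a much smaller base. A minimal Python sketch (the 2016 deep-learning baseline below is inferred from the cited numbers, not stated in the report):

```python
def cagr_project(start_value, rate, years):
    """Project a value forward at a constant compound annual growth rate."""
    return start_value * (1 + rate) ** years

# Working backwards from the cited $1.7B deep-learning figure for 2022
# at a 65.3% CAGR over 2016-2022 (6 compounding years):
implied_2016 = 1.7 / (1 + 0.653) ** 6
print(f"Implied 2016 deep-learning market: ${implied_2016:.3f}B")
print(f"Projected 2022 value: ${cagr_project(implied_2016, 0.653, 6):.2f}B")
```

At that growth rate the market roughly twenty-folds in six years, which is what makes the 65.3 percent figure so striking.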
Of course, if you’ve played with any chatbots, you will know that it’s a promise that is yet to be fulfilled. There’s an “uncanny valley” where, at one end, we sense we’re not talking to a real person and, at the other end, the machine just doesn’t “get” what we mean.
For example, when using a fun weather bot like Poncho I may ask, “If I go outside, what should I wear?” The bot responds, “Oops, I didn’t catch that. For things I can help you with, type ‘help’.”
Yet, when I ask, “If I go outside, should I take an umbrella?” the bot’s almost too-clever response is “Nah, you won’t need your umbrella in Santa Clara, CA.”
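This failure mode is easy to reproduce: many simple bots match intents on trigger keywords, so a paraphrase that avoids those words falls through to a generic fallback. A toy illustration of the pattern (not Poncho’s actual implementation; the keywords and replies are invented):

```python
# Toy keyword-based intent matcher illustrating why paraphrases fail.
INTENTS = {
    "umbrella": "Nah, you won't need your umbrella today.",
    "rain": "No rain in the forecast.",
}
FALLBACK = "Oops, I didn't catch that. For things I can help you with, type 'help'."

def reply(message: str) -> str:
    # Lowercase and strip punctuation so "umbrella?" still matches "umbrella".
    words = [w.strip("?,.!") for w in message.lower().split()]
    for keyword, answer in INTENTS.items():
        if keyword in words:
            return answer
    return FALLBACK

# "umbrella" hits a trigger keyword; "what should I wear" does not.
print(reply("If I go outside, should I take an umbrella?"))
print(reply("If I go outside, what should I wear?"))
```

The umbrella question succeeds only because it happens to contain a keyword the bot knows; the wardrobe question, which requires the same weather lookup, gets the fallback.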
Read more: Venture Beat
Google Translate is getting brainier. The online translation tool recently started using a neural network to translate between some of its most popular languages – and the system is now so clever it can do this for language pairs on which it has not been explicitly trained. To do this, it seems to have created its own artificial language.
Traditional machine-translation systems break sentences into words and phrases, and translate each individually. In September, Google Translate unveiled a new system that uses a neural network to work on entire sentences at once, giving it more context to figure out the best translation. This system is now in action for eight of the most common language pairs on which Google Translate works.
Although neural machine-translation systems are fast becoming popular, most only work on a single pair of languages, so different systems are needed to translate between others. With a little tinkering, however, Google has extended its system so that it can handle multiple pairs – and it can translate between two languages when it hasn’t been directly trained to do so.
For example, if the neural network has been taught to translate between English and Japanese, and English and Korean, it can also translate between Japanese and Korean without first going through English. This capability may enable Google to quickly scale the system to translate between a large number of languages.
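A back-of-the-envelope way to see why this matters for scaling: if zero-shot translation works between any two languages the shared model has seen in training, a handful of trained pairs through a hub language covers many more directions than separate per-pair systems would. A minimal sketch (the language sets are illustrative, not Google’s):

```python
from itertools import permutations

def reachable_pairs(trained_pairs):
    """Directions a shared multilingual model could serve, assuming zero-shot
    translation works between any two languages seen during training."""
    languages = {lang for pair in trained_pairs for lang in pair}
    return set(permutations(languages, 2))

# Trained only on English<->Japanese and English<->Korean...
trained = {("en", "ja"), ("ja", "en"), ("en", "ko"), ("ko", "en")}
covered = reachable_pairs(trained)
# ...the model can also serve Japanese<->Korean without pivoting through English.
print(("ja", "ko") in covered)  # True
print(len(covered))             # 6 directions from 4 trained pairs
```

With n languages sharing one model, coverage grows as n × (n − 1) directions, while the number of trained pairs needed grows only linearly – which is the scaling advantage the article describes.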
“This is a big advance,” says Kyunghyun Cho at New York University. His team and another group at Karlsruhe Institute of Technology in Germany have independently published similar studies working towards neural translation systems that can handle multiple language combinations.
Read more: New Scientist
Artificial intelligence is difficult to develop because real intelligence is mysterious. This mystery manifests in language, or “the dress of thought” as the writer Samuel Johnson put it, and language remains a major challenge to the development of artificial intelligence.
“There’s no way you can have an AI system that’s humanlike that doesn’t have language at the heart of it,” Josh Tenenbaum, a professor of cognitive science and computation at MIT, told Technology Review in August.
In September, Google announced that its Google Neural Machine Translation (GNMT) system can now “in some cases” produce translations that are “nearly indistinguishable” from those of humans. Still, it noted:
“Machine translation is by no means solved. GNMT can still make significant errors that a human translator would never make, like dropping words and mistranslating proper names or rare terms, and translating sentences in isolation rather than considering the context of the paragraph or page.”
In other words, the machine doesn’t entirely get how words work yet.
Read more: Quartz
The concept of artificial intelligence has been around for a long time.
We’re all familiar with HAL 9000 from 2001: A Space Odyssey, C-3PO from Star Wars and, more recently, Samantha from Her. In written fiction, AI characters show up in stories from writers like Philip K. Dick, William Gibson and Isaac Asimov. It sometimes seems as if every sci-fi writer has touched on it.
While many predictions and ideas put forward in sci-fi have come to life, artificial intelligence is probably the furthest behind. We are nowhere near true artificial intelligence as exemplified by the characters mentioned above.
Read more: TechCrunch
Deep inside a spaghetti bowl of suburban Maryland streets this weekend, I turned to a trusted guide for directions back to the highway. But this guide couldn’t figure out what direction we were going, and she asked me to perform nonsensical maneuvers. “Actually, don’t listen to Siri—she can’t figure out where we are,” said a friend in the passenger seat.
Referring to Apple’s artificial-intelligence (AI) assistant as “she” feels natural because of Siri’s female-sounding voice. Although Siri herself will tell you she is neither male nor female—“I exist beyond your human concept of gender”—her relatively natural-sounding voice elicits a warmer response than a stilted robot voice would.
There’s an option buried in every iPhone to make Siri speak in a male-sounding voice (or in a British or an Australian accent), but Siri is not “he” by default for a reason: Research shows that people react more positively to female voices.
Read more: The Atlantic