In a Momentous Discovery, Scientists Show Neanderthals Could Produce Human-Like Speech

Our Neanderthal cousins had the capacity to both hear and produce the speech sounds of modern humans, a new study has found.

Based on a detailed analysis and digital reconstruction of the structure of the bones in their skulls, the study settles one aspect of a decades-long debate over the linguistic capabilities of Neanderthals.

“This is one of the most important studies I have been involved in during my career,” said palaeoanthropologist Rolf Quam of Binghamton University.

“The results are solid and clearly show the Neanderthals had the capacity to perceive and produce human speech. This is one of the very few current, ongoing research lines relying on fossil evidence to study the evolution of language, a notoriously tricky subject in anthropology.”

The notion that Neanderthals (Homo neanderthalensis) were much more primitive than modern humans (Homo sapiens) is outdated, and in recent years a growing body of evidence has demonstrated that they were much more intelligent than we once assumed. They developed technology, crafted tools, created art and held funerals for their dead.

Whether they actually spoke with each other, however, has remained a mystery. Their complex behaviors seem to suggest that they would have had to be able to communicate, but some scientists have contended that only modern humans have ever had the mental capacity for complex linguistic processes.

Whether that’s the case is going to be very difficult to prove one way or another, but the first step would be to determine if Neanderthals could produce and perceive sounds in the optimal range for speech-based communication.

Read more: Science Alert

How does the brain process speech? We now know the answer, and it’s fascinating

Neuroscientists have long known that speech is processed in the auditory cortex, along with some curious activity in the motor cortex. How the motor cortex is involved, though, has been something of a mystery until now. A new study by two NYU scientists resolves one of the last holdouts in a process of discovery that started over a century and a half ago. In 1861, French neurologist Pierre Paul Broca identified what would come to be known as “Broca’s area,” a region in the posterior inferior frontal gyrus.

This area is responsible for processing and comprehending speech, as well as producing it. Interestingly, a fellow scientist on whom Broca had operated turned out, after the operation, to be missing Broca’s area entirely. Yet he was still able to speak. At first he could not form complex sentences, but in time he regained all of his speaking abilities. This meant another region had pitched in, and that a certain amount of neuroplasticity was involved.

In 1871, German neurologist Carl Wernicke discovered another area responsible for processing speech through hearing, this time in the posterior superior temporal lobe. It’s now called Wernicke’s area. The model was updated in 1965 by the eminent behavioral neurologist Norman Geschwind. The updated map of the brain is known as the Wernicke-Geschwind model.

Wernicke and Broca gained their knowledge by studying patients with damage to certain parts of the brain. In the 20th century, electrical brain stimulation began to give us an even greater understanding of the brain’s inner workings. Patients undergoing brain surgery in the middle of the century were given weak electrical brain stimulation. The current allowed surgeons to avoid damaging critically important areas, but it also gave them more insight into which areas controlled which functions.

With the advent of fMRI and other scanning technologies, we were able to look at activity in different regions of the brain and see how language travels across them. We now know that impulses associated with language pass between Broca’s and Wernicke’s areas. Communication between the two helps us understand grammar, how words sound, and what they mean. Another region, the fusiform gyrus, helps us classify words.

Read more: Big Think

Your Speech Is Packed With Misunderstood, Unconscious Messages

Imagine standing up to give a speech in front of a critical audience. As you do your best to wax eloquent, someone in the room uses a clicker to conspicuously count your every stumble, hesitation, um and uh; once you’ve finished, this person loudly announces how many of these blemishes have marred your presentation.

This is exactly the tactic used by the Toastmasters public-speaking club, in which a designated “Ah Counter” is charged with tallying up the speaker’s slip-ups as part of the training regimen. The goal is total eradication. The club’s punitive measures may be extreme, but they reflect the folk wisdom that ums and uhs betray a speaker as weak, nervous, ignorant, and sloppy, and should be avoided at all costs, even in spontaneous conversation.

Many scientists, though, think that our cultural fixation with stamping out what they call “disfluencies” is deeply misguided. Saying um is no character flaw, but an organic feature of speech; there is evidence that, far from distracting listeners, it focuses their attention in ways that enhance comprehension.

Disfluencies arise mainly because of the time pressures inherent in speaking. Speakers don’t pre-plan an entire sentence and then mentally press “play” to begin unspooling it. If they did, they’d probably need to pause for several seconds between each sentence as they assembled it, and it’s doubtful that they could hold a long, complex sentence in working memory. Instead, speakers talk and think at the same time, launching into speech with only a vague sense of how the sentence will unfold, taking it on faith that by the time they’ve finished uttering the earlier portions of the sentence, they’ll have worked out exactly what to say in the later portions. Mostly, the timing works out, but occasionally it takes longer than expected to find the right phrase. Saying “um” is the speaker’s way of signaling that processing is ongoing, the verbal equivalent of a computer’s spinning circle. People sometimes have more disfluencies while speaking in public, ironically, because they are trying hard not to misspeak.

Read more: Nautilus

The Brain Has Its Own “Autofill” Function for Speech

The world is an unpredictable place. But the brain has evolved a way to cope with the everyday uncertainties it encounters: rather than presenting us with many of them, it resolves them into a realistic model of the world. The body’s central controller predicts every contingency, drawing on its stored database of past experiences, to minimize the element of surprise. Take vision, for example: we rarely see objects in their entirety, but our brains fill in the gaps to make a best guess at what we are seeing, and these predictions are usually an accurate reflection of reality.

The same is true of hearing, and neuroscientists have now identified a predictive text-like brain mechanism that helps us anticipate what is coming next when we hear someone speaking. The findings, published this week in PLoS Biology, advance our understanding of how the brain processes speech. They also provide clues about how language evolved, and could even lead to new ways of diagnosing a variety of neurological conditions more accurately.

The new study builds on earlier findings that monkeys and human infants can implicitly learn to recognize artificial grammar, or the rules by which sounds in a made-up language are related to one another. Neuroscientist Yukiko Kikuchi of Newcastle University in England and her colleagues played sequences of nonsense speech sounds to macaques and humans. Consistent with the earlier findings, Kikuchi and her team found that both species quickly learned the rules of the artificial grammar. After this initial learning period, the researchers played more sound sequences, some of which violated the fabricated grammatical rules. They used microelectrodes to record responses from hundreds of individual neurons, as well as from large populations of neurons that process sound information. In this way they were able to compare the responses to both types of sequences and determine the similarities between the two species’ reactions.
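To make the idea concrete, an “artificial grammar” is simply a small set of rules stating which sounds may follow which others. The Python sketch below is a hypothetical illustration of that idea; the syllables and transition rules are invented here and are not the stimuli used in the study.

```python
import random

# A toy "artificial grammar": each nonsense syllable maps to the set of
# syllables allowed to follow it. The syllables and rules are invented for
# illustration and are NOT those used by Kikuchi and colleagues.
GRAMMAR = {
    "tep": ["rak", "dup"],
    "rak": ["dup", "tep"],
    "dup": ["tep"],
}

def generate_sequence(length=5):
    """Build a sequence that obeys the grammar's transition rules."""
    seq = [random.choice(list(GRAMMAR))]
    while len(seq) < length:
        seq.append(random.choice(GRAMMAR[seq[-1]]))
    return seq

def is_grammatical(seq):
    """Check every adjacent pair of syllables against the allowed transitions."""
    return all(nxt in GRAMMAR[cur] for cur, nxt in zip(seq, seq[1:]))

legal = generate_sequence()
illegal = ["dup", "rak", "dup", "dup", "tep"]  # "dup -> rak" and "dup -> dup" break the rules

print(legal, is_grammatical(legal))      # always True
print(illegal, is_grammatical(illegal))  # False: the sequence violates the grammar
```

In the experiment itself, both species first heard many rule-conforming sequences, so later sequences that broke the learned transition rules stood out as violations.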

Read more: Scientific American

Understanding speech not just a matter of believing one’s ears

Even if we hear only part of what someone has said, we automatically add the missing information ourselves when we are familiar with the context. Researchers from the Max Planck Institute for Empirical Aesthetics in Frankfurt and the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig have now succeeded in demonstrating how we do this.

Incomplete utterances are something we constantly encounter in everyday communication. Individual sounds and occasionally entire words fall victim to fast speech rates or imprecise articulation. In poetic language, omissions are used as a stylistic device or are the necessary outcome of the use of regular metre or rhyming syllables. In either case, our comprehension of the spoken content is usually only slightly impaired, if it is affected at all.

The results of previous linguistic research suggest that language is particularly resilient to omissions when the linguistic information can be predicted in terms of both its content and its phonetics. The most probable ending to the sentence “The fisherman was in Norway and caught a sal…” is the word “salmon”. Accordingly, due to its predictability, this sentence-ending word should be able to accommodate the omission of the “m” and “on” sounds.
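As a loose analogy for this kind of filling-in, consider how a predictive-text system completes a truncated word from a known vocabulary. The sketch below is a toy illustration only; the word list and frequency counts are made up, and this is not the mechanism described in the research.

```python
# Toy prefix-based completion, loosely analogous to predictive text. The word
# list and counts are invented; this illustrates the general idea of using
# predictability to restore missing sounds, not the brain's actual mechanism.
VOCABULARY = {"salmon": 120, "salad": 95, "salt": 300, "saliva": 12}

def complete(prefix, vocabulary=VOCABULARY):
    """Return the most frequent word that starts with the given prefix."""
    candidates = {word: count for word, count in vocabulary.items()
                  if word.startswith(prefix)}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

# Without context, raw frequency wins; a fishing-related context narrows the
# candidate words so that "salmon" becomes the best guess.
fishing_context = {"salmon": 120, "saltwater": 40}
print(complete("sal"))                   # "salt"
print(complete("sal", fishing_context))  # "salmon"
```

A sentence frame like the fisherman example works the same way: the context narrows the plausible candidates so sharply that the listener can restore the missing sounds without noticing the gap.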

Read more: Medical Xpress

Finding iconicity in spoken languages

Have you ever wondered why we call a dog a dog and not a cat? Is this an arbitrary decision, or is it based on iconicity—the resemblance between word structure and meaning? New research shows that for Indo-European languages, like English and Spanish, iconicity is more common than previously believed.

The results are important for understanding the nature of human language, explains Lynn Perry, assistant professor of psychology in the University of Miami College of Arts & Sciences and co-lead author of the study.

“Many linguists are trained to believe that languages are arbitrary,” Perry said. “But sometimes what we as scientists accept as fact leads us to miss out on the rich details of experiences,” she said. “We treat learning as this impossibly difficult process because we assume languages are completely arbitrary, but it turns out there’s a lot of structure and information in the language itself that could be making learning easier.”

The study is the first to show that iconicity is prevalent across the vocabulary of a spoken language, explained Marcus Perlman, postdoctoral research associate in the University of Wisconsin-Madison Department of Psychology and co-lead author of the study.

Read more: phys.org