How We Extract Meaning From Language

Much has been said about the remarkable human ability to extract meaning from language, and it is remarkable indeed. In a split second, we perceive a spoken or written word, immediately assign meaning to it, and concatenate (string together) the various words in a sentence. At the end of the comprehension process, the whole sentence is greater than the sum of its parts.

It is an ability that comes almost naturally, at least for spoken words. The roughly 353,000 babies born today will understand some basic words about nine months from now. Around their first birthday, they will speak their first words, often words like “mummy” or “daddy.” Around their second birthday, they will be putting words together in their first sentences.

And if those first sentences don’t arrive right around the second birthday, there is usually no reason for concern. Odds are the child will get there sooner or later and become one of the talkative creatures that we are. Almost any child, anywhere in the world, can learn to understand and speak a language—any of the roughly 6,500 languages in the world.

How Do We Acquire Language?

The big question cognitive scientists have been struggling with—linguists, psychologists, educators, anthropologists, philosophers, and AI researchers alike—is how language is acquired so naturally. A century of research has not provided a definitive answer. However, parts of an answer have been proposed.

Some researchers in the early part of the 20th century argued that language acquisition is not much more than verbal behavior training. Just like a pigeon learning to make a pirouette—step by step, when provided with some reward—children learn how to speak. Rather than an edible reward, the attention and praise from parents could be considered the perfect reinforcers for acquiring our remarkable language processing skills.

Other researchers in the 1950s vigorously rejected the idea that language is acquired through training. Language-learning children, they argued, are not pirouette-making pigeons. Instead, the language-learning child comes with a “language mind” of its own. It does not start out with a blank slate, but instead has a built-in instinct for understanding and speaking. It is because of this innate language acquisition device that children can acquire language so rapidly and effortlessly. Language acquisition is not nurture, they argued; it is nature.

The nature argument can also be found in a prominent explanation that emerged three decades later. Enthusiasm about the rise of computers convinced some researchers that the human brain must itself be like a computer. If computers and the human brain can both understand language, the reasoning went, the brain must use the same kind of computational architecture. Language acquisition was thus seen as a combination of nature and nurture: nature provided the neural network architecture, which could then be trained on the nurturing linguistic input.

Over the last two decades, however, concerns have been raised about the analogy of the human mind as a computer, crunching linguistic symbols into other linguistic symbols. Instead, the argument went, meaning can only come from linking linguistic symbols to perceptual information. That is, language must be grounded to be meaningful.

The meaning for the word “dog” does not come from the fact that it may occur in the same sentence with “cat,” as a computer may compute. Instead, the meaning of the word comes from the fact that in the mind’s eye (and ear, nose, and hand), we can see the four-legged animal, mentally hear its barking, imagine its particular dog smell, and picture what it feels like to pet it. That is how language attains meaning.
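To make the “as a computer may compute” idea concrete, here is a minimal sketch of the co-occurrence approach the author is contrasting with grounded meaning: counting how often words appear in the same sentence and treating those counts as a stand-in for meaning. The tiny corpus and the variable names below are illustrative assumptions, not anything taken from the article.

# Python sketch (assumption: a toy four-sentence corpus) of co-occurrence
# counting, the distributional view of meaning the article argues against.
from collections import Counter, defaultdict
from itertools import combinations

corpus = [
    "the dog chased the cat",
    "the cat watched the dog",
    "the dog barked at the mailman",
    "the cat sat on the mat",
]

# Count how often each pair of distinct words appears in the same sentence.
cooccurrence = defaultdict(Counter)
for sentence in corpus:
    words = set(sentence.split())
    for w1, w2 in combinations(sorted(words), 2):
        cooccurrence[w1][w2] += 1
        cooccurrence[w2][w1] += 1

# Under this view, the "meaning" of "dog" is just its co-occurrence profile.
print(cooccurrence["dog"].most_common(3))

The profile tells us that “dog” keeps company with “cat,” but nothing about four legs, barking, or smell, which is exactly the gap the grounding argument points to.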

Read more: Psychology Today
