How We Extract Meaning From Language

Much has been said about the remarkable human ability to extract meaning from language, and remarkable it is. In a split second, we perceive a spoken or written word, immediately assign meaning to it, and concatenate (string together) the various words of a sentence. At the end of the comprehension process, the whole sentence is greater than the sum of its parts.

It is an ability that comes almost naturally, at least for spoken words. The roughly 353,000 babies born around the world today will understand some basic words about nine months from now. Around their first birthday, they will speak their first words, often “mummy” or “daddy.” Around their second birthday, they will be putting words together in their first sentences.

And if those first sentences don’t appear right around the second birthday, there is usually no reason for concern. Odds are the child will get there sooner or later and join the talkative species that we are. Almost any child, anywhere in the world, can learn to understand and speak any one of the roughly 6,500 languages spoken in the world.

How Do We Acquire Language?

The big question that cognitive scientists—linguists, psychologists, educators, anthropologists, philosophers, and AI researchers alike—have been struggling with is how language is acquired so naturally. A century of research has not provided a definitive answer, but parts of an answer have been proposed.

Some researchers in the early part of the 20th century argued that language acquisition is not much more than verbal behavior training. Just like a pigeon learning to make a pirouette—step by step, when provided with some reward—children learn how to speak. Rather than an edible reward, the attention and praise from parents could be considered the perfect reinforcers for acquiring our remarkable language processing skills.

Other researchers in the 1950s vigorously rejected the idea that language is acquired through training. Language-learning children, they argued, are not pirouette-making pigeons. Instead, the language-learning child has a “language mind” of its own: it does not start out as a blank slate but comes with a built-in instinct for understanding and speaking. It is because of this innate language acquisition device that children can acquire language so rapidly and effortlessly. Language acquisition is not nurture, they argued; it is nature.

The nature argument can also be found in a prominent explanation that emerged three decades later. Enthusiasm about the rise of computers convinced some researchers that the human brain must itself be like a computer. If computers and the human brain can both understand language, the reasoning went, the brain must use the same computational architecture. Language acquisition was thus seen as a combination of nature and nurture: nature provided the neural network architecture, and nurture provided the linguistic input on which that architecture was trained.
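To make that connectionist picture concrete, here is a minimal sketch in Python. Everything in it is an invented illustration, not anyone’s actual model: the corpus is a toy, and the “architecture” is a single softmax layer trained to predict the next word. The point is only that the architecture is fixed in advance (the nature part) while its weights are shaped entirely by linguistic input (the nurture part).

# A deliberately tiny connectionist sketch: a fixed architecture whose
# weights are shaped entirely by linguistic input. The corpus, layer size,
# learning rate, and epoch count are all invented for illustration.
import math
import random

corpus = "the dog sees the cat the cat sees the dog".split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

random.seed(0)
# W[i][j] is the network's tendency to predict word j right after word i.
W = [[random.uniform(-0.1, 0.1) for _ in range(V)] for _ in range(V)]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Train on bigrams: nudge the weights toward each observed next word
# (gradient descent on the cross-entropy loss of a softmax output layer).
lr = 0.5
for _ in range(200):
    for prev, nxt in zip(corpus, corpus[1:]):
        i, j = index[prev], index[nxt]
        probs = softmax(W[i])
        for k in range(V):
            W[i][k] -= lr * (probs[k] - (1.0 if k == j else 0.0))

# After training, the network predicts plausible continuations of "the".
probs = softmax(W[index["the"]])
print({w: round(probs[index[w]], 2) for w in vocab})

On this toy corpus, the trained network assigns roughly equal probability to “dog” and “cat” after “the” and near-zero probability to everything else; that knowledge came entirely from the input, not from the architecture.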

Over the last two decades, however, concerns have been raised about the analogy between the human mind and a computer crunching linguistic symbols into other linguistic symbols. Instead, the argument went, meaning can only come from linking linguistic symbols to perceptual information. That is, language must be grounded to be meaningful.

The meaning of the word “dog” does not come from the fact that it may occur in the same sentence as “cat,” as a computer may compute. Instead, the meaning of the word comes from the fact that in the mind’s eye (and ear, nose, and hand), we can see the four-legged animal, mentally hear its barking, imagine its particular dog smell, and recall what it feels like to pet it. That is how language attains meaning.
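For contrast with the grounded view, here is a minimal sketch of what “as a computer may compute” amounts to in a purely distributional approach: word similarity derived from co-occurrence counts alone. The corpus and the window size are again invented for illustration.

# A toy distributional model: two words count as "similar" if they occur
# in similar contexts. No perception of any kind is involved.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the dog chased the cat",
    "the cat watched the dog",
    "the dog barked at the mailman",
    "the cat slept on the mat",
]

# Count how often each word appears within two positions of every other word.
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if i != j:
                cooc[w][words[j]] += 1

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u[k] * v[k] for k in u)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

print("dog ~ cat:", round(cosine(cooc["dog"], cooc["cat"]), 2))
print("dog ~ mat:", round(cosine(cooc["dog"], cooc["mat"]), 2))

The program rates “dog” and “cat” as far more similar than “dog” and “mat” because they share contexts, yet it has never seen, heard, or smelled either animal; that gap is exactly what the grounding critique targets.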

Read more: Psychology Today

Found in translation: USC scientists map brain responses to stories in three different languages

New brain research by USC scientists shows that reading stories is a universal experience that may result in people feeling greater empathy for each other, regardless of cultural origins and differences.

And in what appears to be a first for neuroscience, USC researchers have found patterns of brain activation when people find meaning in stories, regardless of their language. Using functional MRI, the scientists mapped brain responses to narratives in three different languages — Americanized English, Farsi and Mandarin Chinese.

The USC study opens up the possibility that exposure to narrative storytelling can have a widespread effect on triggering better self-awareness and empathy for others, regardless of the language or origin of the person being exposed to it.

“Even given these fundamental differences in language, which can be read in a different direction or contain a completely different alphabet altogether, there is something universal about what occurs in the brain at the point when we are processing narratives,” said Morteza Dehghani, the study’s lead author and a researcher at the Brain and Creativity Institute at USC.

Dehghani is also an assistant professor of psychology at the USC Dornsife College of Letters, Arts and Sciences, and an assistant professor of computer science at the USC Viterbi School of Engineering.

The study was published in the journal Human Brain Mapping.

Read more: USC News

The link between language and cognition is a red herring

Scientists working on animal cognition often dwell on their desire to talk to the animals. Oddly enough, this particular desire must have passed me by, because I have never felt it. I am not waiting to hear what my animals have to say about themselves, taking the rather Wittgensteinian position that their message might not be all that enlightening. Even with respect to my fellow humans, I am dubious that language tells us what is going on in their heads. I am surrounded by colleagues who study members of our species by presenting them with questionnaires. They trust the answers they receive and have ways, they assure me, of checking their veracity. But who says that what people say about themselves reveals actual emotions and motivations?

This might be true for simple attitudes free from moralisations (‘What is your favourite music?’), but it seems almost pointless to ask people about their love life, eating habits, or treatment of others (‘Are you pleasant to work with?’). It is far too easy to invent post-hoc reasons for one’s behaviour, to be silent about one’s sexual habits, to downplay excessive eating or drinking, or to present oneself as more admirable than one really is.

No one is going to admit to murderous thoughts, stinginess or being a jerk. People lie all the time, so why would they stop in front of a psychologist who writes down everything they say? In one study, female college students reported more sex partners when they were hooked up to a fake lie-detector machine, suggesting that they had understated the number when interviewed without it. I am in fact relieved to work with subjects that don’t talk. I don’t need to worry about the truth of their utterances. Instead of asking them how often they engage in sex, I just count the occasions. I am perfectly happy being an animal watcher.

Read more: Aeon

Synaesthesia could help us understand how the brain processes language

When we speak, listen, read, or write, almost all of the language processing that happens in our brains goes on below the level of conscious awareness. We might be aware of grasping for a particular forgotten word, but we don’t actively think about linguistic concepts like morphemes (the building blocks of words, like the past tense morpheme “-ed”).

Psycholinguists try to delve under the surface to figure out what’s actually going on in the brain, and how well this matches up with our theoretical ideas of how languages fit together. For instance, linguists talk about morphemes like “-ed”, but do our brains actually work with morphemes when we’re producing or interpreting language? That is, do theoretical linguistic concepts have any psychological reality? An upcoming paper in the journal Cognition suggests an unusual way to investigate this: by testing synaesthetes.

Read more: The Guardian

How the Human Brain Reads – In Any Language

Researchers at the University of Connecticut and their colleagues have found that what happens inside the brain during reading is the same regardless of the structure of a person’s written language, and that it is influenced by the same mechanism the brain uses to develop speech.

The study, based on evidence from functional MRI (fMRI) images of participants using four contrasting languages, was published in the Proceedings of the National Academy of Sciences in November.

Lead author Jay Rueckl, director of UConn’s Brain Imaging Research Center, says ‘reading is parasitic on speech’ – meaning that the brain’s activity while reading draws on both speech and print mechanisms rather than being a simple response evoked by print alone.

Read more: UConn Today

Different Written Languages Are Equally Efficient at Conveying Meaning

The research, published in the journal Cognition, finds that a person from, say, China needs the same amount of time to read and understand a text in Mandarin as a person from Britain needs to read and understand a text in English – assuming both are reading in their native language.

Professor of Experimental Psychology at Southampton, Simon Liversedge, says: “It has long been argued by some linguists that all languages have common or universal underlying principles, but it has been hard to find robust experimental evidence to support this claim. Our study goes at least part way to addressing this – by showing there is universality in the way we process language during the act of reading. It suggests no one form of written language is more efficient in conveying meaning than another.”

Read more: Neuroscience News