Before COVID-19, my colleagues Dr. Maureen Muller and Tai Ahu and I conducted research for Te Mātāwai focusing on factors that enable and inhibit Māori from learning and using te reo Māori.
More than 1000 participants responded to our survey, and 57 Māori were interviewed across Aotearoa. From those who had not yet begun learning through to those with conversational proficiency, the main barriers were consistent: feeling whakamā about their language use, not having enough people to speak with, and having limited time and resources.
Ancestral language learners (those with whakapapa Māori learning te reo) commonly report that engaging with te reo Māori involves a high level of emotional vulnerability.
Māori language engagement tends to re-open discussions around why a learner might be in a position where they are needing to formally engage in ancestral language learning, as opposed to learning through intergenerational transmission. Within this process of enquiry, we start to reflect on our own whānau histories of language loss, which are inextricably intertwined with the violent dispossession from our iwi lands, resources, and cultural identities.
Language dispossession directed at Māori was an intentional process of colonization. State-sanctioned abuse of Māori children in schools, inflicted for speaking our language even when they knew no other, accelerated this dispossession (see Waitangi Tribunal reports WAI 11, WAI 262).
Why is it important for us to know about the impacts of colonization on te reo Māori? Partly because it continues to affect Indigenous lives.
Read more: Phys.org
A study in patients with epilepsy is helping researchers understand how the brain manages the task of learning a new language while retaining our mother tongue. The study, by neuroscientists at UC San Francisco, sheds light on the age-old question of why it’s so difficult to learn a second language as an adult.
The somewhat surprising results gave the team a window into how the brain navigates the tradeoff between neuroplasticity — the ability to grow new connections between neurons when learning new things — and stability, which allows us to maintain the integrated networks of things we’ve already learned. The findings appear in the Aug. 30 issue of Proceedings of the National Academy of Sciences.
“When learning a new language, our brains are somehow accommodating both of these forces as they’re competing against each other,” said Matt Leonard, PhD, assistant professor of neurological surgery and a member of the UCSF Weill Institute for Neurosciences.
By using electrodes on the surface of the brain to follow high-resolution neural signals, the team found that clusters of neurons scattered throughout the speech cortex appear to fine-tune themselves as a listener gains familiarity with foreign sounds.
“These are our first insights into what’s changing in the brain between first hearing the sounds of a foreign language and being able to recognize them,” said Leonard, who is a principal investigator on the study.
Read more: University of California San Francisco
New research is teasing out more of the profoundly miraculous process of language learning in babies. And it turns out that even more is going on prenatally than previously suspected.
By looking at international adoptees — babies who were adopted soon after birth and who grow up hearing a different language than what they heard in the womb — researchers can see how what babies hear before and soon after birth affects how they perceive sounds, giving new meaning to the idea of a “birth language.”
Experts have known for some time that newborns prefer to listen to voices speaking the language that they’ve been listening to in the womb, said Anne Cutler, a psycholinguist who is a professor at the Marcs Institute for Brain, Behaviour and Development at Western Sydney University, in Australia.
Newborns can recognize the voices they’ve been hearing for the last trimester in the womb, especially the sounds that come from their mothers, and prefer those voices to the voices of strangers. They also prefer other languages with similar rhythms, rather than languages with very different rhythms. (Newborns indicated their preferences by how long they sucked on specially rigged pacifiers that enabled them to hear one speaker versus another, or one language versus another.)
Read more: NY Times
People with synesthesia experience the sensory world in a unique way — for example, they “taste” words or “hear” colors. Now, new research suggests that people who learn a second language but aren’t exposed to that second language very early in life are more likely to have this sensory-switching ability than those who are natively bilingual.
“Groups of people with different linguistic backgrounds have different rates of synesthesia — and quite different rates,” said study co-author Marcus Watson, an experimental psychologist at York University in Toronto. “It ranges from 0 percent to about 5 percent depending on what their language background is.”
The findings bolster a theory that synesthesia — the bizarre brain phenomenon in which one sensory or cognitive experience is automatically triggered by another — may develop to improve learning in complicated, rule-based tasks such as mastering reading, music theory and time telling.
Read more: Live Science
Cherokee has been one of a number of endangered Native American languages to see a renaissance in recent history. A group of University of Kansas researchers has co-authored a study demonstrating that the ways children learn and speak the language in a Cherokee immersion school are an ongoing process of renewal rather than a return to an idealized notion of “speakerhood.”
Researchers gathered data on students’ Cherokee in oral, listening, reading and writing skills at Tsalagi Dideloquasdi, a Cherokee language immersion school in Tahlequah, Oklahoma, that is a core part of Cherokee Nation’s revitalization efforts. They found the school to be a “quintessential translanguaging space,” in which students’ competencies are formed by the students, teachers, parents and members of the community, as well as the historical fluidity of Cherokee-English bilingualism. In other words, as their language skills develop, the students communicate in an innovative hybrid form of Cherokee rather than adhering to rigid language rules.
“We’ve looked meticulously at how they’re piecing together this complex morphology of the Cherokee language, which is very different from English,” said Lizette Peter, associate professor in the Department of Curriculum and Teaching. “We view these students as language boundary crossers. They don’t see English and Cherokee as two distinct, separate languages. They’re creating linguistic possibilities never before seen in the acquisition of Cherokee.”
Read more: Phys.org
Research unveiled last month at the Australian Conference for Computers in Education examined the impact of humanoid robots on students’ computational thinking.
The aim of the study was to understand the impact of humanoid NAO robots on student learning, the integration of the robots into the curriculum and the pedagogical approaches that enhance and extend student learning.
NAO robots, developed by Aldebaran Robotics, a French robotics company, have been used for research and education purposes in schools and universities worldwide. As of 2015, over 5,000 NAO robots are in use in over 50 countries.
One of these robots, called ‘Pink’, is part of a collaborative research project between the University of Queensland, the Queensland University of Technology, Swinburne University in Melbourne and the Association of Independent Schools of South Australia (AISSA).
The students and teachers at Maitland Lutheran School have been using Pink to embed the language of the traditional owners of the land, the Narungga people, into the school’s new Digital Technologies subject. About 23% of the school’s students are Aboriginal.
Read more: The Educator
It was almost a century ago that a young Lakota Indian, James Emery, was sent away from his home on South Dakota’s Rosebud Indian reservation to boarding school in the central state of Kansas. When he returned several years later, he was alarmed to notice changes in the way his language was being spoken. Folks had begun to shorten words, a phenomenon that linguists call “clipping.”
“Once you start clipping words, soon it becomes commonplace,” said his grandson Randy Emery, a cultural resources management specialist in Rosebud. “And every time that expression is used, you lose a little bit more of the language.”
Fearful that the Lakota language would someday be lost forever, the elder Emery began recording native speakers whenever he got the chance.
“He started recording on old Edison cylinders,” said his grandson, “then he went to reel-to-reel and eventually cassette tapes.”
By the end of his life in 1977, the elder Emery had amassed hundreds of recordings of prominent Lakota, including survivors of the historic Battle of the Little Bighorn. Today, those recordings are housed at South Dakota’s Black Hills State University and serve as a valuable resource for language preservationists.
Read more: Voice of America
The young teachers sparkle with energy in the classroom, but they also feel fear, guilt and disapproval as they teach the ancient Native language of Inupiaq to students on Alaska’s North Slope, because they aren’t fluent themselves.
Teachers who spoke Inupiaq as a first language entered the classroom in the 1980s and now most have retired. With hardly any fluent speakers left under 50 years old, the North Slope Borough School District started hiring young teachers who were learning the language themselves.
“In my estimation, these are the bravest, most courageous people on Earth,” said Pausauraq Jana Harcharek, the district’s director of Inupiaq education. “With our first learner-teacher, the feedback we got from the community instantly was, ‘What are you doing hiring a non-speaker in the classroom to teach the language?’”
Read more: Alaska Dispatch News
Walk into the Wuneechanunk Preschool on a typical weekday morning and you’ll be greeted by the smell of burning sage and words unheard anywhere else in the world: children singing in the Shinnecock language.
Yes, this is the only Shinnecock reservation, and it’s a small one, about 650 people. But the reason the sound of Shinnecock being spoken is so unusual is that there are no fluent speakers of Shinnecock left — haven’t been for more than a century. With New York City only an hour and a half drive west, the pressure to assimilate has always been intense for the Native Americans of Long Island. That’s the topic of this week’s World in Words podcast.
“It didn’t seem like a reasonable thing to spend their time and effort on for their children if it wasn’t going to be helpful for their future,” says Tina Tarrant, the tribe’s language researcher. “And people don’t imagine that your language is going to disappear entirely. That’s, like, such a strange concept that people don’t think of it.”
Read more: PRI
Life is hard for immigrant children – new place, new friends, new language. Starting next January, 4- and 5-year-olds in four cities across Europe will get a helping hand, testing out robot tutors to help them get up to speed in the local language.
The project, called L2TOR, is run by linguists and roboticists from a consortium of universities across Europe. It aims to help young children gain the language skills they need when they enter the school system. The pilot will see children working with robots to boost their Dutch in the cities of Tilburg and Utrecht and German in Bielefeld, while kids in Istanbul in Turkey will get help with English.
L2TOR will have children work through a language course on a tablet computer under the watchful eye of a NAO robot. These bots are made by French firm Aldebaran Robotics and are often used in classrooms. Before starting the lesson, the robot will explain what the child is going to learn, then, once the lesson is under way, it will observe the child’s body language and assist them when they get stuck.
“We want to help these children improve their language skills through one-to-one interaction with a robot, to help them catch up,” says Paul Vogt of Tilburg University in the Netherlands, who works on L2TOR.
Robots have some advantages that humans do not – like infinite patience.
Read more: New Scientist
If you’re anything like me, you will have triple-ringed the new year as the time to finally start learning that new language. You’ll have acquired a dictionary, a phrasebook and some Post-its, and you may have added 500 Essential Verbs in … to your Christmas list for good measure.
However, unless you want to learn one of the more “popular” languages such as Italian, Spanish, German, Portuguese, and – increasingly – Japanese, Russian, Mandarin, Greek or Polish, you’ll be hard pressed to find an equally varied and adequate set of learning aids for your language of choice. Taking on an under-resourced language may initially seem like a huge task, whether it is a minority language (spoken by a minority of the population of a territory, such as Romansh in Switzerland, Moldovan in Ukraine and Galician in Spain), an endangered language (like Ainu, a language isolate spoken in Japan and considered one of the world’s rarest and most threatened) or simply a language that fails to generate enough interest to merit adequate learning tools.
This should not be a reason to give up on your goal, though. From thinking critically about your reasons for learning, to scouting out websites that suit your preferred learning style, here are some ideas that might help you along the way.
Read more: The Guardian
Helping a tiny baby to learn your language is like building a bonfire with words for twigs. Nothing happens for ages. You keep putting the bloody twigs on and trudging back and forth in a cold, damp field. You may have a faulty pelvic floor and much rather be watching something on the telly with a towel under your bum, but bonfires don’t build themselves, do they?
But there’s a problem. No matter how many words you pile on, nothing catches. At first, you try to build it properly, sentence by sentence, with full stops and proper pauses, but by the end, you’re just flinging random words on top of each other, sweating and slightly mad. You stand back. It’s taken a year or longer. You now have a huge pile of impressive but slightly useless wood. You try singing nursery rhymes to it, but it stares blankly back before doing a poo and crying.
You give up and are about to put the kettle on. Then you hear a roar and a crackle behind you. The fire has caught. Everything you piled on that bonfire, even the words you thought didn’t go in, is playing its part, burning brightly with the sheer exuberance of language. You stand back to bask in the heat and the magic and the wildness of the flames, rubbing your hands and telling all your neighbours: “Yep, I built that. Oh, it was nothing. Just love and patience, really.”
From that moment, the fire burns for ever.
Read more: The Guardian