A new study led by Northwestern University researchers used machine learning—a branch of artificial intelligence—to identify speech patterns in children with autism that were consistent between English and Cantonese, suggesting that features of speech might be a useful tool for diagnosing the condition.
Undertaken with collaborators in Hong Kong, the study yielded insights that could help scientists distinguish between genetic and environmental factors shaping the communication abilities of people with autism, potentially helping them learn more about the origin of the condition and develop new therapies.
Children with autism often talk more slowly than typically developing children, and exhibit other differences in pitch, intonation and rhythm. But those differences (called “prosodic differences” by researchers) have been surprisingly difficult to characterize in a consistent, objective way, and their origins have remained unclear for decades.
However, a team of researchers led by Northwestern scientists Molly Losh and Joseph C.Y. Lau, along with Hong Kong-based collaborator Patrick Wong and his team, successfully used supervised machine learning to identify speech differences associated with autism.
The data used to train the algorithm were recordings of English- and Cantonese-speaking young people with and without autism telling their own version of the story depicted in a wordless children’s picture book called “Frog, Where Are You?”
The results were published in the journal PLOS One on June 8, 2022.
“When you have languages that are so structurally different, any similarities in speech patterns seen in autism across both languages are likely to be traits that are strongly influenced by the genetic liability to autism,” said Losh, who is the Jo Ann G. and Peter F. Dolle Professor of Learning Disabilities at Northwestern.
“But just as interesting is the variability we observed, which may point to features of speech that are more malleable, and potentially good targets for intervention.”
Lau added that using machine learning to identify the key elements of speech predictive of autism marked a significant step forward for researchers, who have been limited by English-language bias in autism research and by the subjectivity of human judgments when classifying speech differences between people with and without autism.
“Using this method, we were able to identify features of speech that can predict the diagnosis of autism,” said Lau, a postdoctoral researcher working with Losh in the Roxelyn and Richard Pepper Department of Communication Sciences and Disorders at Northwestern.
“The most prominent of those features is rhythm. We’re hopeful that this study can be the foundation for future work on autism that leverages machine learning.”
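The approach Lau describes, training a supervised classifier on quantified speech features, can be sketched as follows. This is a minimal illustration using synthetic data and a generic logistic-regression classifier: the feature names (speech rate, pitch variability, rhythm score), their values, and the model choice are assumptions for demonstration, not the study's actual measurements or method.

```python
# Illustrative sketch: supervised classification of autism diagnosis from
# prosodic speech features. All data below is synthetic; the features and
# group means are invented stand-ins, not values from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # hypothetical number of narration recordings

# One row per recording: [speech rate, pitch variability, rhythm score]
X_asd = rng.normal(loc=[3.5, 0.8, 0.4], scale=0.3, size=(n // 2, 3))
X_td = rng.normal(loc=[4.2, 1.0, 0.7], scale=0.3, size=(n // 2, 3))
X = np.vstack([X_asd, X_td])
y = np.array([1] * (n // 2) + [0] * (n // 2))  # 1 = autism, 0 = typical

# Cross-validated accuracy of a simple supervised classifier
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

In a real pipeline, the prosodic features would be extracted automatically from the narration recordings, and inspecting the trained model's coefficients would indicate which features (here, rhythm) carry the most predictive weight.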
The researchers believe their work could contribute to an improved understanding of autism. Artificial intelligence could make diagnosing autism easier by reducing the burden on healthcare professionals, making diagnosis accessible to more people, Lau said. It could also provide a tool that might one day transcend cultures, because a computer can analyze words and sounds quantitatively regardless of language.