Autism and auditory processing disorder: A follow-up

By Sophie Schwartz, Ph.D. September 17, 2020

Sophie Schwartz, Ph.D., is a post-doctoral research fellow at Boston University’s Center for Autism Research Excellence, directed by Dr. Helen Tager-Flusberg. Dr. Schwartz received a predoctoral fellowship in 2016 funded by Royal Arch Masons International and Autism Speaks.

About CAPD: Central auditory processing disorder, or difficulty processing sounds, is common in people with autism.

Key Findings:

  • When listening to names against a noisy background of other people talking, participants with autism and more severe language deficits did not produce the same early brain responses, as shown on brain scans, as neurotypical participants when differentiating the sound of their own name from another person’s name.
  • Participants with autism but with no or only minor language deficits produced early brain responses similar to neurotypical participants.

What It Means:

  • These findings may indicate that certain people who haven’t learned language as expected by adolescence have trouble detecting differences between speech sounds – a skill that’s fundamental for language acquisition.

What is central auditory processing disorder?

Difficulties processing sounds, often described as central auditory processing disorder or auditory processing disorder (CAPD/APD), are particularly common in people with autism. Estimates of the prevalence of CAPD in people with autism vary widely, in part because there is no “gold-standard” or official way of measuring these disorders. However, in several large research studies, roughly 65 percent of parents described their child with autism as showing sensitivity to noise, while smaller studies report that up to 93 percent of people with autism display atypical responses to sounds, including problems filtering sounds.

In May 2016, Autism Speaks and Royal Arch Masons International announced funding for new fellowships to support research by young investigators to better understand central auditory processing disorders in people with and without autism. I was selected to receive one of these fellowships.

Several publications resulted from this fellowship, the most recent of which was published online in August 2020 in the journal Autism Research. It offers an update on what we’ve learned about the intersection of autism and CAPD from brain imaging research and proposes future steps that are needed to uncover even more answers.

About auditory processing in autism

Atypical reactions to sensory input, including sound, are part of one of the core criteria for diagnosing autism. Parent- and self-reports collected in multiple studies support the importance of this criterion. These reports describe people with autism as having atypical perception of and responses to sounds, such as feeling overwhelmed by noisy environments or frequently covering their ears even when no abrasive noise is present. Effective auditory processing requires the ability to differentiate certain sounds from others and the ability to amplify important sounds while ignoring unimportant ones. These skills are extremely important when paying attention to, understanding and remembering spoken information, especially in noisy environments.

Past research has tended to focus on group-level analyses of auditory processing (i.e., comparing people with autism to neurotypical people) but has not looked closely enough at individual differences among people on the autism spectrum. We designed our research to identify whether subgroups within the autism spectrum were more likely to show signs of a disrupted auditory processing system. In particular, we hypothesized that these challenges were more likely in those who, by adolescence, still had not acquired more than minimal spoken language skills.

Our current study

To expand our understanding of the brain activity patterns associated with both sound sensitivity and difficulty with language in children and young adults on the autism spectrum, our team at Boston University’s Center for Autism Research Excellence designed a research study that could be implemented with a wide range of people with autism. We considered not only how people with autism respond to their name, but also how they respond to their name in a context that requires them to filter out other people talking.

To capture their response, we used a technology called electroencephalography (EEG). With EEG, we can place a cap with small sensors on participants’ heads and record their brain’s electrical activity. Using this brain imaging technology, we could capture information reflecting how a person perceived speech, directly from their brain activity, without requiring that the participant understand complex instructions or produce language.

We monitored the brain’s response to particularly meaningful sounds – specifically, a recording of the participant’s own name, versus unfamiliar names that would not carry the same degree of personal meaning. We also focused on a situation in which names were heard in the context of a noisy background with other people talking.

Important lessons from studying the most understudied ASD group

Studying auditory processing in people with autism, let alone those with severe language and sensory issues, is challenging. But one of the most important lessons from this study was that these challenges are not insurmountable.

Almost 50 people diagnosed with autism sat for over 45 minutes while wearing an EEG cap to participate in our study. Participants learned to feel comfortable wearing the cap by practicing at home and in the lab with a practice EEG cap. Many times, we would begin by just having participants allow the cap to touch their head, and gradually progress to having them wear the cap for longer intervals, from 10 seconds to five minutes.

For some parents, the idea of their child sitting still for neuroimaging seemed unlikely. But with practice, it was often possible.

While this sometimes required hours more work, the results were worth it. We cannot continue to avoid researching the brains of minimally verbal people with autism – they and their families deserve information, too.

Results and new directions

In my post introducing this research in 2018, I wrote that we hoped to fill gaps in our knowledge of how people with autism perceive and process sounds, especially speech. We hoped to learn more about who is processing sounds in atypical ways and who is likely to benefit from interventions that target sound processing and language.

We found that when participants were listening to names against a noisy background of other people talking, those with autism and more severe language deficits did not produce the same early brain responses as the neurotypical participants when differentiating the sound of their own name from another person’s name.

In contrast, participants with autism but with no or only minor language deficits produced early brain responses similar to neurotypical participants. These early brain responses are often considered to be indicative of low-level speech detection – think, hearing the first letter of your name (“J” in John) and noticing it as different from other names that don’t start with a “J.”

Our results provide evidence for the hypothesis that certain people who haven’t learned language as expected by adolescence have trouble detecting differences between speech sounds – a skill that’s fundamental for language acquisition.

In addition, we looked at a late brain response classically shown to look different when people hear their own name versus another person’s name, or even more generally think about themselves in comparison to another person.

We found that the degree to which this late brain response looked neurotypical significantly correlated with that person’s ability to filter important from unimportant sounds, as measured by parent perceptions of their child’s abilities and actions in the presence of different sounds.

These findings provide evidence for the hypothesis that some people with autism struggle with paying attention to important speech sounds like their own names and that this may be related to difficulty selecting important speech while filtering out unimportant sounds or information.
