Perfecting pitch perception | MIT News

New research from neuroscientists at MIT suggests that natural soundscapes have shaped our sense of hearing, optimizing it for the types of sounds we encounter most often.

In a study published on December 14 in the journal Nature Communications, researchers led by Josh McDermott, associate investigator at the McGovern Institute for Brain Research, used computer modeling to explore the factors that influence how humans perceive pitch. Their model’s pitch perception closely resembled that of humans, but only when it was trained using music, voices, or other naturalistic sounds.

Humans’ ability to recognize pitch (in essence, the rate at which a sound repeats) gives melody to music and nuance to spoken language. Although this is arguably the best-studied aspect of human hearing, researchers are still debating which factors determine the properties of pitch perception, and why it is more acute for some types of sounds than others. McDermott, who is also an associate professor in MIT’s Department of Brain and Cognitive Sciences and an investigator at the Center for Brains, Minds, and Machines (CBMM), is particularly interested in understanding how our nervous system perceives pitch, because cochlear implants, which send electrical signals about sound to the brain in people with profound deafness, do not reproduce this aspect of human hearing very well.
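To make “repetition rate” concrete, here is a minimal, self-contained sketch (illustrative only, not code from the study) that estimates a sound’s pitch by finding the lag at which the waveform best repeats itself; the function name and parameter values are invented for this example.

```python
import numpy as np

def estimate_f0_autocorr(signal, sample_rate, f0_min=50.0, f0_max=500.0):
    """Estimate the repetition rate (pitch) of a waveform as the lag
    at which its autocorrelation peaks."""
    signal = signal - np.mean(signal)
    n = len(signal)
    # Autocorrelation computed efficiently via the FFT.
    spectrum = np.fft.rfft(signal, n=2 * n)
    autocorr = np.fft.irfft(spectrum * np.conj(spectrum))[:n]
    # Only search lags corresponding to plausible pitches.
    lag_min = int(sample_rate / f0_max)
    lag_max = int(sample_rate / f0_min)
    best_lag = lag_min + np.argmax(autocorr[lag_min:lag_max])
    return sample_rate / best_lag

# A harmonic complex tone that repeats 200 times per second.
sr = 16000
t = np.arange(0, 0.1, 1 / sr)
tone = sum(np.sin(2 * np.pi * 200 * h * t) for h in range(1, 6))
print(estimate_f0_autocorr(tone, sr))  # prints ~200.0
```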

“Cochlear implants can help people understand speech very well, especially if they are in a quiet environment. But they really don’t reproduce pitch perception very well,” says Mark Saddler, a graduate student and CBMM researcher who co-led the project and is the first graduate fellow of the K. Lisa Yang Integrative Computational Neuroscience Center. “One of the reasons it’s important to understand the detailed basis of pitch perception in people with normal hearing is to try to get a better idea of how we might replicate it artificially in a prosthesis.”

Artificial hearing

Pitch perception begins in the cochlea, the snail-shaped structure in the inner ear where sound vibrations are converted into electrical signals and transmitted to the brain via the auditory nerve. The cochlea’s structure and function help determine how and what we hear. And although it has not been possible to test the idea experimentally, McDermott’s team suspected that our “auditory diet” might shape our hearing as well.

To explore how our ears and our environment influence pitch perception, McDermott, Saddler, and research assistant Ray Gonzalez built a computer model called a deep neural network. Neural networks are a type of machine learning model widely used in automatic speech recognition and other artificial intelligence applications. Although the structure of an artificial neural network coarsely resembles the connectivity of neurons in the brain, the models used in engineering applications do not actually hear the way humans do. The team therefore developed a new model to reproduce human pitch perception. Their approach combined an artificial neural network with an existing model of the mammalian ear, uniting the power of machine learning with insights from biology. “These new machine learning models are truly the first that can be trained to perform complex auditory tasks and actually do them well, at human performance levels,” says Saddler.
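The article does not include the model itself, but the architecture it describes (a fixed, biologically inspired ear model feeding a trainable neural network) could be sketched roughly as below. Everything here is an assumption for illustration: the class names, layer sizes, and the crude rectified filterbank standing in for a detailed auditory-nerve simulation.

```python
import torch
import torch.nn as nn

class SimplifiedCochlea(nn.Module):
    """Stand-in for a detailed auditory-nerve model: a fixed bank of
    bandpass filters followed by rectification, approximating the
    frequency decomposition performed by the cochlea."""
    def __init__(self, n_channels=64, kernel_size=512):
        super().__init__()
        self.filters = nn.Conv1d(1, n_channels, kernel_size, stride=4, bias=False)
        self.filters.requires_grad_(False)  # the peripheral model is fixed, not learned

    def forward(self, waveform):                    # (batch, 1, samples)
        return torch.relu(self.filters(waveform))  # crude half-wave rectification

class PitchNet(nn.Module):
    """CNN that maps the simulated auditory-nerve response to one of
    n_classes discrete F0 bins (pitch estimation as classification)."""
    def __init__(self, n_channels=64, n_classes=700):
        super().__init__()
        self.cochlea = SimplifiedCochlea(n_channels)
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 128, 9, stride=2), nn.ReLU(),
            nn.Conv1d(128, 128, 9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, waveform):
        x = self.cochlea(waveform)
        x = self.features(x).squeeze(-1)
        return self.classifier(x)  # logits over F0 bins
```

Keeping the ear-model front-end fixed means that only the “brain-like” stages of the model are shaped by training, which is one way to realize the division of labor the researchers describe.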

The researchers trained the neural network to estimate pitch by asking it to identify the repetition rate of sounds in a training set. This gave them the ability to change the conditions under which pitch perception developed. They could manipulate the types of sounds they presented to the model, as well as the properties of the ear that processed those sounds before transmitting them to the neural network.
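A hedged sketch of such a training loop, continuing the illustrative PitchNet above: here the training sounds are synthetic harmonic tones whose repetition rate is known by construction (the actual study used naturalistic sounds, as described below), and all hyperparameters are invented.

```python
import math

import torch
import torch.nn.functional as F

def make_harmonic_tone(f0, sr=20000, dur=0.05, n_harmonics=8):
    """Synthesize a harmonic complex tone whose repetition rate is f0,
    so f0 serves directly as the training label."""
    t = torch.arange(0, dur, 1 / sr)
    return sum(torch.sin(2 * torch.pi * f0 * h * t)
               for h in range(1, n_harmonics + 1))

# 700 candidate pitches, log-spaced from 80 Hz to 1 kHz (invented values).
f0_bins = torch.logspace(math.log10(80.0), math.log10(1000.0), 700)

model = PitchNet()  # the illustrative model sketched earlier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(1000):
    # Sample a batch of tones with known repetition rates.
    labels = torch.randint(0, len(f0_bins), (16,))
    batch = torch.stack([make_harmonic_tone(f0_bins[i]) for i in labels])
    logits = model(batch.unsqueeze(1))  # input shape: (batch, 1, samples)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```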

When the model was trained using sounds that are important to humans, such as speech and music, it learned to estimate pitch in much the same way humans do. “We reproduced many features of human perception very well… suggesting that the model uses similar cues from the sounds and the cochlear representation to perform the task,” says Saddler.

But when the model was trained using more artificial sounds, or in the absence of any background noise, it behaved very differently. For example, Saddler says, “if you optimize for this idealized world where there are never any competing sound sources, you can learn a pitch strategy that appears to be very different from that of humans, which suggests that the human pitch system may really have been optimized to deal with cases where noise sometimes obscures parts of the sound.”

The team also found that the timing of nerve signals initiated in the cochlea is essential to pitch perception. In a healthy cochlea, McDermott explains, nerve cells fire precisely in time with the sound vibrations that reach the inner ear. When the researchers degraded this relationship in their model, so that the timing of nerve signals was less tightly correlated with the vibrations produced by incoming sounds, pitch perception deviated from normal human hearing.
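One simple way to mimic that manipulation in a simulation (an illustrative assumption, not necessarily how the study implemented it) is to smooth the simulated nerve response in time, so that its fluctuations no longer track the fine timing of the waveform:

```python
import torch
import torch.nn.functional as F

def degrade_phase_locking(nerve_response, sample_rate, cutoff_hz=320.0):
    """Smooth a simulated auditory-nerve response (batch, channels, time)
    with a moving-average window, so firing is less precisely synchronized
    to the fine timing of the incoming sound. A lower cutoff_hz degrades
    the timing information more severely."""
    window = max(1, int(sample_rate / cutoff_hz))
    channels = nerve_response.shape[1]
    kernel = torch.ones(channels, 1, window) / window
    # Depthwise convolution: each frequency channel is smoothed independently.
    return F.conv1d(nerve_response, kernel, groups=channels, padding=window // 2)
```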

McDermott says it will be important to take this into account as researchers work to develop better cochlear implants. “It very much suggests that for cochlear implants to produce normal pitch perception, there needs to be a way to reproduce the fine-grained timing information in the auditory nerve,” he says. “At the moment, they don’t do that, and there are technical challenges to making that happen, but the modeling results very clearly suggest that’s what you need to do.”

