MIT Creates Machine-Learning System that Listens Like a Human

Leave it to MIT to design a computer that can hear like a human. What does that mean, exactly? In this case, it means the machine can replicate auditory tasks such as identifying a musical genre. Researchers say deep neural networks make this possible. "This model, which consists of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks, was used by the researchers to shed light on how the human brain may be performing the same tasks," according to MIT News.

The researchers at MIT "trained their neural network to perform two auditory tasks, one involving speech and the other involving music. For the speech task, the researchers gave the model thousands of two-second recordings of a person talking. The task was to identify the word in the middle of the clip. For the music task, the model was asked to identify the genre of a two-second clip of music. Each clip also included background noise to make the task more realistic (and more difficult)." It took thousands of trials, but eventually, the model learned to perform these tasks as well as humans do. Interestingly, the model often made mistakes on the same clips that tripped up actual human listeners.
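To get a feel for the "background noise" part of that setup, here is a minimal sketch of how noise is commonly mixed into a training clip at a chosen signal-to-noise ratio. The function name, the 16 kHz sample rate, and the synthetic sine-wave "clip" are all illustrative assumptions, not details from the MIT study:

```python
import numpy as np

def mix_noise(clip, noise, snr_db):
    """Mix background noise into a clip at a target signal-to-noise ratio (dB).

    The noise is rescaled so that the power ratio of clip to scaled noise
    equals 10 ** (snr_db / 10), then added to the clip sample-by-sample.
    """
    sig_power = np.mean(clip ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return clip + scale * noise

# Illustrative example: a two-second "clip" at an assumed 16 kHz sample rate.
sr = 16000
t = np.arange(2 * sr) / sr
clip = np.sin(2 * np.pi * 440 * t)  # a pure tone standing in for speech/music
noise = np.random.default_rng(0).standard_normal(2 * sr)
noisy = mix_noise(clip, noise, snr_db=10)  # harder version of the same clip
```

Lowering `snr_db` makes the task harder, which is how noise levels are typically varied when a model's error pattern is compared against human listeners.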
