Unveiling Similarities: How Human and Artificial Neural Networks Approach Language Learning
Paragraph 1
There’s been a long-standing debate as to whether neural networks learn in the same way that humans do. Now, a study published last month suggests that natural and artificial neural networks (ANNs) learn in similar ways, at least when it comes to language. The researchers – led by Gašper Beguš, a computational linguist at the University of California, Berkeley – compared the brain waves of humans listening to a simple sound to the signal produced by a neural network analyzing the same sound. The results were uncannily alike. They not only help demystify how ANNs learn, but also suggest that human brains may not come already equipped with hardware and software specially designed for language.
Paragraph 2
To establish a baseline for the human side of the comparison, the researchers played a single syllable – “bah” – repeatedly in two eight-minute blocks to 14 English speakers and 15 Spanish speakers. While it played, the researchers recorded fluctuations in the average electrical activity of neurons in each listener’s brainstem – the part of the brain where sounds are first processed.
Paragraph 3
In addition, the researchers fed the same “bah” sounds to two different sets of neural networks – one trained on English sounds, the other on Spanish. The researchers then recorded the processing activity of the neural network, focusing on the artificial neurons in the layer of the network where sounds are first analyzed (to mirror the brainstem readings). It was these signals that closely matched the human brain waves.
Paragraph 4
The researchers chose a kind of neural network architecture known as a generative adversarial network (GAN). A GAN is composed of two neural networks – a discriminator and a generator. The generator creates a sample, which could be an image or a sound. The discriminator determines how close it is to a training sample and offers feedback, resulting in another try from the generator, and so on until the GAN can deliver the desired output.
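The adversarial feedback loop described above can be sketched in miniature. The toy model below is only an illustration of the generator–discriminator dynamic, not the study’s actual audio GAN: “real” sounds are stood in for by numbers near 1.0, the generator holds a single adjustable parameter, and the discriminator’s feedback is a gradient that points the generator toward more realistic output.

```python
import random

random.seed(0)

def real_sample():
    # stand-in for a real training sound from the data set
    return 1.0 + random.uniform(-0.05, 0.05)

def discriminator_feedback(fake, real):
    # gradient of -(fake - real)**2 with respect to `fake`:
    # feedback that nudges the generator toward samples the
    # discriminator would judge as closer to the real ones
    return 2.0 * (real - fake)

g = 0.0      # the generator's single parameter (its current "sound")
lr = 0.1     # learning rate

for step in range(200):
    fake = g                                        # generator creates a sample
    fb = discriminator_feedback(fake, real_sample())  # discriminator offers feedback
    g += lr * fb                                    # generator tries again
```

After the loop, `g` sits near 1.0: each round of feedback moves the generator’s output closer to the real samples, which is the “and so on until the GAN can deliver the desired output” part of the process in miniature.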
Paragraph 5
In this study, the discriminator was initially trained on a collection of either English or Spanish sounds. Then the generator had to find a way of producing similar sounds. As a result of this training, the discriminator also got better at distinguishing between real and generated sounds. It was at this point, after the discriminator was fully trained, that the researchers played it the “bah” sounds. The team measured the fluctuations in the average activity levels of the discriminator’s artificial neurons, which produced the signal so similar to the human brain waves. This likeness between human and machine activity levels suggested that the two systems are engaging in similar activities. Just as research has shown that feedback from caregivers shapes infants’ productions of sounds, feedback from the discriminator network shapes the sound productions of the generator network.
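The measurement itself – tracking fluctuations in the *average* activity of a layer’s artificial neurons as a sound plays – can be sketched as follows. This is an illustrative stand-in, not the study’s code: the layer weights, window size, and synthetic “bah”-like waveform are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16)) * 0.1   # hypothetical first-layer weights

def layer_activity(window):
    # mean activity of the layer's artificial neurons for one slice of
    # sound, analogous to the averaged electrical activity recorded
    # from the listeners' brainstems
    h = np.tanh(window @ W)              # the layer's activations
    return h.mean()

t = np.linspace(0, 1, 800)
sound = np.sin(2 * np.pi * 120 * t)      # toy stand-in for the "bah" syllable
windows = sound.reshape(100, 8)          # feed the sound in 100 short windows

signal = np.array([layer_activity(w) for w in windows])
# `signal` over time is the machine analogue of the brain-wave recording
```

Comparing such a signal against the human recording is, in spirit, how the likeness between the two systems was assessed.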
Paragraph 6
The experiment also revealed another interesting parallel between humans and machines. The brain waves showed that the English- and Spanish-speaking participants heard the “bah” sound differently: the brainstem of English speakers responded to the sound slightly earlier than the brainstem of Spanish speakers, and the GAN trained on English responded to that same sound slightly earlier than the Spanish-trained model. This provided additional evidence, Beguš said, that humans and artificial networks are “likely processing things in a similar fashion.”