Here’s a follow-on to my column about evolving neural networks. That one involved deception; this one involves distraction. That one involved robots; this one involves babies.
About fifty years ago, logician and child psychologist Jean Piaget designed what has become a classic experiment to test the memory and learning of babies. It is a game of hiding and finding, and through it we have discovered that when infants up to 10 months old are repeatedly shown a toy being hidden in a certain place, they continue to look for the toy there, even when they have also seen it hidden in some other place. By the age of one year, however, they get it. They figure out they can look in more than one place.
This experiment has been used for decades by scientists interested in human development, and most recently it led a Hungarian team to a surprising finding: the very ability of young infants to read social cues misleads them and causes them to perform worse in the hide-and-seek game. Older babies, however, were still able to see through the deception.
Now scientists at the University of Iowa have used a neural network model to show that the problem is not the infants’ mistaken reading of a social cue but simple distraction, which claims the young infants’ attention and thereby disrupts their memory of the actual hiding of the toy.
How, you are wondering, did the UI team verify an internal cognitive process with a neural network? They trained their neural net on the responses of the infants in many different versions of the hiding game, and then programmed an interruption in the flow of computation so that the computer stopped "paying attention" to the hiding event. It, too, flunked the memory test.
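The flavor of the idea can be captured in a toy sketch. This is emphatically not the Iowa team's actual model; it is a minimal illustration, with made-up parameters, of how a strong memory trace built by repeated hidings at one spot can override a weakly attended hiding at a new spot:

```python
# Toy sketch of the A-not-B error (hypothetical, not the Iowa model).
# Assumptions: each hiding event adds activation to that location's
# memory trace, scaled by how much "attention" was paid; traces decay
# between trials; the model "searches" wherever the trace is strongest.

def reach_after_hiding(attention_to_B, n_hides_at_A=4, decay=0.8):
    """Return the location the model searches after the toy is
    finally hidden at B instead of the usual spot A."""
    trace = {"A": 0.0, "B": 0.0}
    # Repeated hidings at A build up a strong, habit-like trace.
    for _ in range(n_hides_at_A):
        trace = {loc: v * decay for loc, v in trace.items()}
        trace["A"] += 1.0  # full attention on these familiar trials
    # One hiding at B, encoded only as strongly as attention allows.
    trace = {loc: v * decay for loc, v in trace.items()}
    trace["B"] += attention_to_B
    return max(trace, key=trace.get)

print(reach_after_hiding(attention_to_B=3.0))  # attentive: searches "B"
print(reach_after_hiding(attention_to_B=0.5))  # distracted: searches "A"
```

With attention intact, the fresh trace at B wins; interrupt the encoding, as the team did with their network, and the old habit at A takes over, just like the younger infants.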
What is the take-home message from this? That you can fool all of the babies only part of their lives? Or that a neural network is never too old to be immature?