In the early stages of object vision, the brain detects 3D shape fragments (bumps, hollows, shafts, spheres) — a newly discovered natural intelligence strategy that researchers at Johns Hopkins University have also found in artificial intelligence networks trained to recognize visual objects.
A new paper in Current Biology details how neurons in area V4 — the first stage specific to the brain’s object vision pathway — represent 3D shape fragments, not just the 2D shapes that have been used to study V4 for the last 40 years. The Johns Hopkins scientists then found nearly identical responses among artificial neurons in an early stage (layer 3) of AlexNet, an advanced computer vision network. Early detection of 3D structure presumably helps both natural and artificial vision perceive solid, three-dimensional objects in the real world.
“I was shocked to see strong, clear signals for 3D shape as early as V4,” said Ed Connor, a professor of neuroscience and director of the Zanvyl Krieger Mind/Brain Institute. “But I never would have guessed in a million years that you would see the same thing happening in AlexNet, which is only trained to translate 2D images into object labels.”
Replicating human vision has been one of the long-standing challenges of artificial intelligence. Built on high-capacity graphics processing units (GPUs) developed for gaming and on vast training sets fed by the explosion of images and video on the Internet, deep (multilayer) networks such as AlexNet have achieved major gains in object recognition.
Connor and his team applied the same image-response analyses to natural and artificial neurons and found strikingly similar response patterns in V4 and AlexNet layer 3. What explains what Connor calls this “spooky correspondence” between the brain — a product of evolution and lifelong learning — and AlexNet, which was built by computer scientists and trained only to label object photographs?
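Comparisons of this kind are often made by correlating each population’s responses to a shared image set, unit by unit. A minimal NumPy sketch of that idea, using synthetic data as stand-ins for recorded V4 responses and AlexNet layer-3 activations (the paper’s actual analysis methods may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: responses of 50 units to 200 shared images.
# A common "shape tuning" signal plus independent noise per population.
latent = rng.normal(size=(50, 200))                  # shared tuning signal
v4_like = latent + 0.5 * rng.normal(size=(50, 200))  # simulated V4 responses
net_like = latent + 0.5 * rng.normal(size=(50, 200)) # simulated layer-3 units

def pattern_similarity(a, b):
    """Mean Pearson correlation between matched per-unit response profiles."""
    a = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
    b = (b - b.mean(axis=1, keepdims=True)) / b.std(axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1) / a.shape[1]))

# Shared tuning yields high similarity; unrelated noise yields near zero.
print(pattern_similarity(v4_like, net_like))
print(pattern_similarity(v4_like, rng.normal(size=(50, 200))))
```

With matched tuning the similarity is well above chance, while an unrelated noise population scores near zero — the same logic, applied to real recordings and network activations, is what makes the V4/AlexNet correspondence quantifiable.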
In fact, Connor said, AlexNet and similar deep networks were designed in part on the basis of the multi-stage visual networks in the brain. The close similarities they observed, he said, may point to opportunities for exploiting the correspondence between natural and artificial intelligence.
“The most promising existing models for understanding the brain are neural networks. Conversely, the brain is the greatest source of strategies for bringing artificial intelligence closer to natural intelligence,” Connor said.
Story Source: Materials provided by Johns Hopkins University. Note: Content may be edited for style and length.