Comparing deep networks with the brain: Can they ‘see’ as well as humans? | India News

BENGALURU: A new study by IISc’s Centre for Neuroscience (CNS) investigated how deep neural networks – machine learning systems inspired by the network of brain cells, or neurons, in the human brain – compare to the human brain in terms of visual perception.
Deep neural networks can be trained to perform specific tasks, and researchers say they have played an important role in helping scientists understand how our brains perceive the things we see.
“Although deep networks have developed significantly over the past decade, they are still nowhere near as good as the human brain at perceiving visual cues. In a recent study, SP Arun, associate professor at CNS, and his team compared various qualitative properties of these deep networks with those of the human brain,” IISc said in a statement.
Although deep networks are a good model for understanding how the human brain visualizes objects, they work differently from it, IISc said: while complex computations are trivial for these networks, certain tasks that are relatively easy for humans can be difficult for them to complete.
“In the present study, published in Nature Communications, Arun and his team tried to understand which visual tasks these networks can perform naturally by virtue of their architecture, and which require further training. The team studied 13 different perceptual effects and discovered previously unknown qualitative differences between deep networks and the human brain,” the statement reads.
One example, according to IISc, was the Thatcher effect – a phenomenon where people find it easier to recognize changes in local features in an upright image, but find it difficult when the image is turned upside down.
Deep networks trained to recognize upright faces showed the Thatcher effect, unlike networks trained to recognize objects. Another visual property of the human brain, called mirror confusion, was also tested on these networks. To humans, mirror reflections along the vertical axis look more similar to the original image than reflections along the horizontal axis. The researchers found that deep networks likewise show stronger mirror confusion for vertically reflected images than for horizontally reflected ones.
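One illustrative way to probe mirror confusion in a network is to compare feature distances: if a network’s features for an image sit closer to those of its vertical (left–right) mirror than of its horizontal (up–down) mirror, the network shows mirror confusion. The sketch below is hypothetical and not the study’s actual method: `features` is a stand-in random linear map, where a real experiment would use activations from a layer of a trained deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in feature extractor: a fixed random linear projection.
# In a real experiment, this would be a deep network layer's activations.
W = rng.standard_normal((64, 16 * 16))

def features(img):
    """Flatten a 16x16 image and project it to a 64-d feature vector."""
    return W @ img.ravel()

def mirror_distances(img):
    """Return feature distances from an image to its two mirror reflections.

    Mirror confusion predicts the vertical-mirror (left-right flip) distance
    is the smaller of the two for a system resembling human vision.
    """
    f = features(img)
    d_vertical = np.linalg.norm(f - features(np.fliplr(img)))    # left-right flip
    d_horizontal = np.linalg.norm(f - features(np.flipud(img)))  # up-down flip
    return d_vertical, d_horizontal

img = rng.standard_normal((16, 16))
d_v, d_h = mirror_distances(img)
print(f"vertical-mirror distance:   {d_v:.2f}")
print(f"horizontal-mirror distance: {d_h:.2f}")
```

With the random stand-in the two distances carry no meaning; only with a trained network’s features would a systematically smaller vertical-mirror distance indicate mirror confusion.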
“Another phenomenon peculiar to the human brain is that it focuses on coarser details first. This is known as the global advantage effect. In an image of a tree, for example, our brain would first see the tree as a whole before noticing the details of the leaves in it,” explains Georgin Jacob, first author and PhD student at CNS.
Surprisingly, he said, the neural networks showed a local advantage: unlike the brain, they focus first on the finer details of an image. So although these networks and the human brain perform the same object-recognition task, the steps the two follow are very different.
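The global-versus-local contrast can be illustrated with a simple probe, which is not the study’s actual stimulus design (that used dedicated hierarchical stimuli): perturb an image in two ways – remove fine detail by blurring (a local change) or shuffle coarse blocks while keeping their contents (a global change) – and see which perturbation moves the network’s features more. All names below are hypothetical, and `features` is again a stand-in random projection rather than a real network.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 16 * 16))  # stand-in feature map

def features(img):
    """Flatten a 16x16 image and project it to a 64-d feature vector."""
    return W @ img.ravel()

def blur(img, k=4):
    """Remove fine (local) detail: replace each k x k block by its mean."""
    h, w = img.shape
    coarse = img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.kron(coarse, np.ones((k, k)))

def scramble_blocks(img, k=4):
    """Destroy the global arrangement: shuffle k x k blocks, keep contents."""
    h, w = img.shape
    blocks = img.reshape(h // k, k, w // k, k).swapaxes(1, 2).reshape(-1, k, k)
    shuffled = blocks[rng.permutation(len(blocks))]
    return shuffled.reshape(h // k, w // k, k, k).swapaxes(1, 2).reshape(h, w)

img = rng.standard_normal((16, 16))
f = features(img)
d_local = np.linalg.norm(f - features(blur(img)))             # fine detail removed
d_global = np.linalg.norm(f - features(scramble_blocks(img))) # coarse layout destroyed
print(f"distance after losing local detail:  {d_local:.2f}")
print(f"distance after losing global layout: {d_global:.2f}")
```

Under this probe, a network with a local advantage would be disturbed more by the blur than by the block shuffle; the brain’s global advantage predicts the opposite pattern.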
Arun, the senior author of the study, says that identifying these differences could push researchers closer to making these networks more brain-like. Such analyses can help researchers build more robust neural networks that not only perform better but are also immune to the ‘adversarial attacks’ aimed at derailing them.
