New MIT brain research shows how AI can help us understand our consciousness

A team of researchers from MIT and Massachusetts General Hospital recently published a study linking social awareness to individual neuronal activity. To the best of our knowledge, this is the first time evidence for ‘theory of mind’ has been identified at this scale.

Measuring the activity of large groups of neurons is the bread and butter of neuroscience. Even a simple MRI scan can highlight specific regions of the brain and give scientists an indication of what they are used for and, in many cases, what kinds of thinking are taking place. But figuring out what’s going on at the level of a single neuron is another matter entirely.

According to the paper:

Here, using recordings of single cells in the human dorsomedial prefrontal cortex, we identify neurons that reliably encode information about others’ beliefs across richly varying scenarios and that distinguish self- from other-belief-related representations … these findings reveal a detailed cellular process in the human dorsomedial prefrontal cortex for representing another’s beliefs and identify candidate neurons that could support theory of mind.

In other words, the researchers believe they have observed the individual brain neurons that form the patterns that make us consider what other people are feeling and thinking. In essence, they’ve identified empathy in action.

This could have a huge impact on brain research, especially in the fields of mental illness and social anxiety disorders, or in the development of individualized treatments for people on the autism spectrum.

Perhaps the most interesting thing about the team’s work, however, is what it can teach us about consciousness.


The researchers asked 15 patients who were already scheduled to undergo a specific type of brain surgery (unrelated to the study) to answer a few questions and complete a simple behavioral test. According to a Massachusetts General Hospital press release:

Microelectrodes inserted into the dorsomedial prefrontal cortex recorded the behavior of individual neurons as patients listened to short narratives and answered questions about them. For example, participants were presented with this scenario to evaluate how they considered another’s beliefs of reality: ‘You and Tom see a jar on the table. After Tom leaves, you move the jar to a cupboard. Where does Tom believe the jar to be?’

After listening to each story, the participants had to make inferences about what another person believed. The experiment did not alter the planned surgical approach or the patients’ clinical care.

The experiment essentially took a big concept (brain activity) and narrowed it down as far as possible. By adding this layer of knowledge to our collective understanding of how individual neurons communicate and work together – ultimately giving rise to a theory of other minds within our own consciousness – it may become possible to identify and quantify other neural systems in action using similar experimental techniques.
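To make the idea of “a neuron that encodes something” a bit more concrete, here is a minimal, purely illustrative Python sketch. It is not the authors’ analysis pipeline: the spike counts are simulated, and the `permutation_test` helper is a stand-in for the kind of statistical test researchers use to decide whether a single cell’s firing rate carries information about a task condition (here, belief-related stories versus control stories).

```python
# Toy illustration (not the study's actual analysis): test whether a single
# neuron's firing rate differs between two trial conditions, e.g. stories that
# require reasoning about another person's belief versus control stories.
# All numbers below are made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial spike counts for one recorded neuron.
belief_trials = rng.poisson(lam=12, size=40)   # "other-belief" stories
control_trials = rng.poisson(lam=8, size=40)   # control stories

def permutation_test(a, b, n_perm=10_000, rng=rng):
    """Two-sided permutation test on the difference in mean firing rate."""
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(pooled)
        diff = shuffled[: len(a)].mean() - shuffled[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_perm

diff, p = permutation_test(belief_trials, control_trials)
print(f"mean rate difference = {diff:.2f} spikes/trial, p = {p:.4f}")
# A small p-value would suggest this neuron's activity carries information
# about the belief condition -- the kind of evidence used to flag a cell as
# a candidate "theory of mind" neuron.
```

In a real study, a test of this sort would be run for every recorded neuron, with corrections for multiple comparisons and far richer models of the task variables; the sketch is only meant to show the general shape of the question being asked of the data.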

It would, of course, be impossible for scientists to manually stimulate, observe, and label the roughly 86 billion neurons in a human brain – if for no other reason than that it would take thousands of years just to count them, much less see how they respond to stimulation.

Fortunately, we have entered the artificial intelligence era, and if there’s anything AI is good at, it’s doing monotonous things at scale – like tagging 86 billion individual neurons.

It’s not much of a stretch to imagine the Massachusetts team’s methodology being automated. While the current iteration appears to require invasive sensors – hence the use of volunteers who were already undergoing brain surgery – it is certainly within the realm of possibility that such fine-grained readings could one day be achieved with an external device.
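As a purely hypothetical illustration of what “automated tagging” might look like, the sketch below clusters simulated neurons by their average response profiles across task conditions. Everything in it – the data, the condition names, and the choice of k-means – is an assumption made for demonstration, not a description of any published pipeline.

```python
# Hypothetical sketch of automated tagging: group recorded neurons by their
# average response profile across task conditions, so that a machine rather
# than a human does the monotonous labeling. The data are simulated.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Simulated matrix: one row per neuron, one column per task condition
# (e.g. mean firing rate during "self-belief", "other-belief", "control").
n_conditions = 3
profiles = np.vstack([
    rng.normal(loc=[10, 10, 10], scale=1.0, size=(100, n_conditions)),  # untuned
    rng.normal(loc=[8, 15, 8], scale=1.0, size=(100, n_conditions)),    # other-belief tuned
    rng.normal(loc=[15, 8, 8], scale=1.0, size=(100, n_conditions)),    # self-belief tuned
])

# Unsupervised clustering assigns each neuron a provisional functional tag.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
for cluster in range(3):
    members = profiles[labels == cluster]
    print(f"cluster {cluster}: {len(members)} neurons, "
          f"mean profile = {members.mean(axis=0).round(1)}")
```

Scaled up by many orders of magnitude, and fed by whatever recording technology eventually makes such data available, this is the kind of monotonous sorting job that AI is well suited to.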

The ultimate goal of such a system would be to identify and map every neuron in the human brain while it operates in real time. It would be like seeing a hedge maze from a hot air balloon after an eternity spent lost in its twists and turns.

It would give us a glimpse of consciousness in action, and possibly allow us to replicate it more accurately in machines.

Published on January 27, 2021 – 20:34 UTC
