New research suggests the entire universe could be a giant neural network

The core idea is deceptively simple: every observable phenomenon in the entire universe can be modeled by a neural network. And that means the universe itself may be a neural network.

Vitaly Vanchurin, a professor of physics at the University of Minnesota Duluth, published an incredible paper last August on the arXiv pre-print server titled 'The World as a Neural Network'. It managed to slide under our notice until today, when Victor Tangermann of Futurism published an interview with Vanchurin discussing the paper.

The big idea

According to the paper:

We discuss a possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: 'trainable' variables (e.g., bias vector or weight matrix) and 'hidden' variables (e.g., state vector of neurons).
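To make the quoted distinction concrete, here is a minimal sketch (not from Vanchurin's paper, just a generic toy network) of the two kinds of degrees of freedom: the slowly changing trainable variables (weights and biases) versus the rapidly changing hidden variables (the neuron state vector).

```python
import numpy as np

rng = np.random.default_rng(0)

# "Trainable" variables: the weight matrix and bias vector,
# which change slowly as the network learns.
W = rng.normal(size=(3, 3))   # weight matrix
b = np.zeros(3)               # bias vector

# "Hidden" variables: the state vector of the neurons,
# which changes quickly with every update step.
x = rng.normal(size=3)        # neuron state vector

# One dynamical step: the new neuron state is a function of
# the old state, the weights, and the biases.
x_new = np.tanh(W @ x + b)

print(x_new.shape)  # (3,)
```

In this picture, physics-as-usual would live in the fast dynamics of `x`, while learning would live in the slow dynamics of `W` and `b`.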

At its most basic, Vanchurin's work attempts to explain away the gap between quantum and classical physics. We know that quantum physics does an excellent job of explaining what's going on in the universe at very small scales. When we're dealing with, say, individual photons, we can work with quantum mechanics at an observable, repeatable, measurable scale.

But as we pan out, we're forced to use classical physics to describe what's happening, because we lose the thread when we make the transition from observable quantum phenomena to classical observations.

The argument

The root problem with coming up with a theory of everything – in this case, one that defines the very nature of the universe itself – is that it usually ends up replacing one proxy-for-god with another. Theorists have posited everything from a divine creator to the idea that we all live in a computer simulation, but the two most enduring explanations for our universe are based on distinct interpretations of quantum mechanics. These are called the "many worlds" and "hidden variables" interpretations, and they are the ones Vanchurin tries to reconcile with his 'world as a neural network' theory.

To that end, Vanchurin concludes:

In this paper, we discussed a possibility that the entire universe on its most fundamental level is a neural network. This is a very bold claim. We are not just saying that artificial neural networks can be useful for analyzing physical systems or for discovering physical laws, we are saying that this is how the world around us actually works. With this respect it could be considered as a proposal for the theory of everything, and as such it should be easy to prove it wrong. All that is needed is to find a physical phenomenon which cannot be described by neural networks. Unfortunately (or fortunately) it is easier said than done.

Hot take: Vanchurin specifically says he's not adding anything to the 'many worlds' interpretation, but that's where the most interesting philosophical implications lie (in this author's humble opinion).

If Vanchurin's work pans out in peer review, or at least leads to greater scientific scrutiny of the idea of the universe as a fully functioning neural network, then we'll have found a thread to pull on the path toward a successful theory of everything.

If we're all nodes in a neural network, what's the network for? Is the universe one giant, closed network, or is it a single layer in a larger network? Or maybe we're just one of trillions of other universes connected to the same network. When we train our neural networks, we run thousands or millions of cycles until the AI is properly 'trained'. Are we just one of innumerable training cycles serving some larger machine's larger purpose?
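The "thousands or millions of cycles" the paragraph above mentions can be sketched in a few lines. This is a generic illustration (nothing to do with Vanchurin's model): a single-weight network learning y = 2x, where each training cycle nudges the trainable weight a little closer to the right answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the "correct" relationship is y = 2x.
xs = rng.normal(size=100)
ys = 2.0 * xs

w = 0.0    # trainable weight, starts untrained
lr = 0.1   # learning rate

# Run many training cycles (epochs); each pass nudges the
# weight toward a lower mean-squared error.
for epoch in range(1000):
    grad = np.mean(2 * (w * xs - ys) * xs)  # d(MSE)/dw
    w -= lr * grad

print(round(w, 3))  # ≈ 2.0
```

After enough cycles the weight settles at the target value; "properly trained" just means this error-reduction loop has converged.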

You can read the whole paper here on arXiv.

Published on March 2, 2021 – 19:18 UTC
