New ‘Liquid’ AI Is Constantly Learning From Its Experience of the World

Despite all the comparisons to the human brain, AI is still not much like us. Maybe that’s a good thing. In the animal kingdom, brains come in all shapes and sizes. So, in a new machine learning approach, engineers have done away with the human brain and all its beautiful intricacies, turning instead to the brain of a humble worm for inspiration.

Simplicity, it turns out, has its benefits. The resulting neural network is efficient and transparent, and here’s the kicker: it’s a lifelong learner.

Whereas most machine learning algorithms can’t hone their skills beyond an initial training period, the researchers say the new approach, called a liquid neural network, has a kind of built-in ‘neuroplasticity’. That is, as it goes about its work, say driving a car or guiding a robot, it can learn from experience and adjust its connections on the fly.

In a noisy and chaotic world, such adaptability is essential.

A Worm Brain Driver

The algorithm’s architecture is inspired by the mere 302 neurons that make up the nervous system of C. elegans, a tiny nematode (or roundworm).

In work published last year, the group, which includes researchers from MIT and the Institute of Science and Technology Austria (IST Austria), said that despite its simplicity, C. elegans displays surprisingly interesting and varied behavior. So they developed equations to mathematically model the worm’s neurons and then built them into a neural network.

Their worm-brain algorithm was much simpler than other leading machine learning algorithms, and yet it could still perform similar tasks, such as keeping a car in its lane.

“Today, deep learning models with many millions of parameters are often used for learning complex tasks such as autonomous driving,” said Mathias Lechner, a PhD student at IST Austria and study author. “However, our new approach enables us to reduce the size of the networks by two orders of magnitude. Our systems only use 75,000 trainable parameters.”

Now, in a new paper, the group is taking their worm-inspired system further by adding a whole new capability.

Old worm, new tricks

The output of a neural network – for example, turning the steering wheel to the right – depends on a set of weighted connections between the ‘neurons’ of the network.

It’s much the same in our brains. Each brain cell is connected to many other cells. Whether a particular cell fires or not depends on the sum of the signals it receives. Above a certain threshold, or weight, the cell fires a signal to its own network of downstream connections.

In a neural network, these weights are called parameters. As the system feeds training data through the network, the parameters converge on the configuration that produces the best results.
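To make the idea concrete, here’s a minimal sketch of a single artificial neuron in Python. The weights, bias, and inputs are illustrative toy values, not anything from the researchers’ model; training would normally adjust these parameters and then freeze them.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A toy artificial neuron: a weighted sum of inputs passed through a threshold."""
    total = np.dot(weights, inputs) + bias
    return 1.0 if total > 0 else 0.0  # "fire" only above the threshold

# Illustrative parameters; in a real network, training would tune these values
weights = np.array([0.8, -0.3, 0.5])
bias = -0.2
inputs = np.array([1.0, 0.4, 0.6])

print(neuron(inputs, weights, bias))  # 1.0 -> the neuron fires
```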

Usually, a neural network’s parameters are locked in after training, and the algorithm is set loose on the world. But in the real world this can make it brittle: show an algorithm something that deviates too much from its training, and it will break. Not an ideal result.

In contrast, the parameters in a liquid neural network are allowed to keep changing over time and with experience. The AI learns on the job.
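The group’s earlier work describes ‘liquid time-constant’ neurons, whose state follows a differential equation in which the effective time constant depends on the incoming data. The sketch below is a rough, simplified illustration of that idea, not the published implementation; the nonlinearity f, the constants tau and A, and the scalar weights are all stand-ins chosen for readability.

```python
import numpy as np

def f(x, I, W=0.5, U=0.8, b=0.0):
    """Toy nonlinearity shared by the decay and input terms (a stand-in for the
    small neural network used in the liquid time-constant formulation)."""
    return np.tanh(W * x + U * I + b)

def ltc_step(x, I, tau=1.0, A=1.0, dt=0.1):
    """One Euler step of dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A.
    Because f depends on the current input I, the neuron's effective time
    constant shifts with what it is seeing -- the 'liquid' part."""
    fx = f(x, I)
    dxdt = -(1.0 / tau + fx) * x + fx * A
    return x + dt * dxdt

# Feed a stream of inputs; the state keeps adapting as new data arrives
x = 0.0
for I in [0.2, 0.9, -0.4, 1.2]:
    x = ltc_step(x, I)
    print(round(x, 3))
```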

This adaptability means the algorithm is less likely to break when the world throws new or noisy information its way, such as when rain obscures an autonomous car’s camera. Also, unlike larger algorithms, whose inner workings are largely inscrutable, the algorithm’s simple architecture lets researchers peer inside and examine its decision-making.

Neither its new capability nor its still-tiny size seems to be holding the AI back. The algorithm performed as well as or better than other state-of-the-art time series algorithms at predicting the next steps in a sequence of events.

“Everyone is talking about scaling up their network,” said Ramin Hasani, lead author of the study. “We want to scale down, have fewer but richer nodes.”

A flexible algorithm that consumes relatively little computing power would be an ideal robotic brain. Hasani believes the approach may be useful in other applications that involve real-time analysis of new data such as video processing or financial analysis.

He plans to keep refining the approach to make it more practical.

“We have a more expressive neural network inspired by nature. But this is only the beginning of the process,” said Hasani. “The obvious question is how do you expand it? We think this kind of network could be an important element of future intelligence systems.”

Is bigger better?

At a time when big players like OpenAI and Google are regularly making headlines with giant machine learning algorithms, it’s a compelling example of an alternative approach headed in the opposite direction.

OpenAI’s GPT-3 algorithm dropped jaws last year, both for its size – at the time a record 175 billion parameters – and its capabilities. A recent Google algorithm topped the charts at over a trillion parameters.

Nevertheless, critics worry that the pursuit of ever-bigger AI is wasteful and expensive, and that it consolidates research in the hands of the few companies with the cash to fund large-scale models. Further, these huge models are ‘black boxes’, their actions largely impenetrable. This can be especially problematic when models are trained unsupervised on the unfiltered internet: there’s no telling (or perhaps controlling) what bad habits they’ll pick up.

Academic researchers are increasingly aiming to address some of these issues. As companies like OpenAI, Google, and Microsoft push to prove the bigger-is-better hypothesis, it’s possible that serious AI innovations in efficiency will emerge elsewhere – not despite a lack of resources, but because of it. As they say, necessity is the mother of invention.

Image Credit: Benjamin Henon / Unsplash
