The 2010s were great for artificial intelligence, thanks to advances in deep learning, a branch of AI that has become feasible due to the growing ability to collect, store and process large amounts of data. Today, deep learning is not only a topic of scientific research, but also a key component of many everyday applications.
But a decade of research and application has made it clear that deep learning is not the ultimate solution to the ever-elusive challenge of creating human-level AI.
What do we need to take AI to the next level? More data and bigger neural networks? New algorithms for deep learning? Approaches other than deep learning?
This topic has been hotly debated in the AI community, and it was the focus of an online discussion held by Montreal.AI last week. Titled “AI Debate 2: Moving AI Forward: An Interdisciplinary Approach,” the debate brought together scientists from different backgrounds and disciplines.
Hybrid artificial intelligence
Cognitive scientist Gary Marcus, who organized the debate, reiterated some of the major shortcomings of deep learning, including excessive data requirements, poor transfer of knowledge to other domains, opacity, and a lack of reasoning and knowledge representation.
Marcus, who is an outspoken critic of deep-learning approaches, published a paper in early 2020 proposing a hybrid approach that combines learning algorithms with rule-based software.
Other speakers also pointed to hybrid artificial intelligence as a possible solution to the challenges facing deep learning.
“One of the most important questions is to identify the building blocks of AI and how to make AI more trustworthy, explainable and interpretable,” said computer scientist Luis Lamb.
Lamb, who is a co-author of the book Neural-Symbolic Cognitive Reasoning, proposed a foundational approach to neural-symbolic AI based on both logical formalization and machine learning.
“We use logic and knowledge representation to represent the reasoning process [and that] is integrated with machine learning systems so that we can also effectively reform neural learning using deep learning machinery,” said Lamb.
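As a concrete, deliberately simplified illustration of that pattern, the hypothetical sketch below pairs a stand-in for a trained neural perception module with a hand-written symbolic rule base. The predicate names, rules, and `perceive` stub are invented for this example; Lamb’s actual systems are far richer.

```python
# Minimal neural-symbolic sketch (illustrative only, not Lamb's system).
# A "neural" perception module emits soft predicates; a symbolic rule
# base then reasons over them with simple forward chaining.

THRESHOLD = 0.5  # confidence needed to treat a soft predicate as a fact

def perceive(image):
    """Stand-in for a trained neural network: maps raw input to
    predicate confidences. Hard-coded here for illustration."""
    return {"has_wheels": 0.92, "has_wings": 0.08, "is_metallic": 0.81}

# World knowledge as Horn-style rules: (body predicates) -> head predicate
RULES = [
    ({"has_wheels", "is_metallic"}, "is_vehicle"),
    ({"has_wings", "is_vehicle"}, "is_aircraft"),
]

def infer(soft_predicates):
    """Forward-chain the rule base over thresholded neural outputs."""
    facts = {p for p, conf in soft_predicates.items() if conf >= THRESHOLD}
    changed = True
    while changed:  # keep applying rules until no new facts appear
        changed = False
        for body, head in RULES:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

print(infer(perceive(image=None)))  # -> includes 'is_vehicle'
```

The appeal of the hybrid arrangement is visible even at this toy scale: the learned component handles noisy perception, while the symbolic component contributes explicit, inspectable reasoning steps.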
Inspiration from evolution
Fei-Fei Li, a computer science professor at Stanford University and former chief AI scientist at Google Cloud, pointed out that in the history of evolution, vision was one of the key catalysts for the emergence of intelligence in living beings. Likewise, work on image classification and computer vision helped spark the deep learning revolution of the past decade. Li is the creator of ImageNet, a dataset of millions of labeled images used to train and evaluate computer vision systems.
“As scientists, we ask ourselves, what is the next north star?” Li said. “There is more than one. I am extremely inspired by evolution and development.”
Li pointed out that intelligence in humans and animals stems from active perception and interaction with the world, a trait that is sorely lacking in current AI systems, which rely on data curated and labeled by humans.
“There is a fundamental critical loop between perception and actuation that drives learning, understanding, planning and reasoning. And this loop can be better realized when our AI agent is embodied, can switch between exploratory and exploitative actions, and is multimodal, multitask, generalizable and often social,” she said.
In her Stanford lab, Li is currently working on building interactive agents that use perception and action to understand the world.
OpenAI researcher Ken Stanley also discussed lessons from evolution. “There are properties of evolution in nature that are so powerful and that we have not yet explained algorithmically, because we cannot recreate phenomena like those nature has created,” Stanley said. “These are properties we should continue to chase and understand, and they are properties not only of evolution but also of ourselves.”
Reinforcement learning
Computer scientist Richard Sutton pointed out that work on AI mostly lacks a “computational theory,” a term coined by the neuroscientist David Marr, who is known for his work on vision. A computational theory defines what goal an information processing system pursues and why it pursues that goal.
“In neuroscience, we lack a high-level understanding of the goal and purposes of the mind. This is also true in artificial intelligence, perhaps more surprisingly so in AI. There is very little computational theory in Marr’s sense in AI,” Sutton said. He added that textbooks often define AI simply as “getting machines to do what people do,” and that most current conversations in AI, including the debate between neural networks and symbolic systems, “are about how to achieve something, as if we already understand what it is we are trying to do.”
“Reinforcement learning is the first computational theory of intelligence,” Sutton said, referring to the branch of AI in which agents learn through trial-and-error interaction with an environment and must find ways to maximize their reward. “Reinforcement learning is explicit about the goal, about the what and the why. In reinforcement learning, the goal is to maximize an arbitrary reward signal. To that end, the agent must develop a policy, a value function and a generative model,” Sutton said.
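For readers unfamiliar with those ingredients, the sketch below shows a minimal tabular Q-learning loop on a toy chain environment. The environment, reward scheme and hyperparameters are invented for illustration; the learned Q-table serves as the value function, the greedy rule over it is the policy, and, being model-free, the sketch omits the generative model Sutton mentions.

```python
import random

# Toy chain environment: states 0..4, actions 0 (left) and 1 (right).
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Environment dynamics: move left/right, bounded to [0, GOAL]."""
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # state-action value estimates

for _ in range(2000):                      # training episodes
    state = random.randrange(GOAL)         # exploring starts
    for _ in range(50):                    # cap episode length
        # Epsilon-greedy policy: mostly exploit, occasionally explore.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge estimate toward reward + discounted max.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state
        if done:
            break

# Greedy policy should now choose "right" (1) in every non-terminal state.
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)])
```

Even in this toy setting, the agent discovers the reward only through blind exploration, which hints at the sample-efficiency complaints raised below.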
He added that the field needs to further develop an agreed-upon computational theory of intelligence, and said that reinforcement learning is currently the most prominent candidate, though he acknowledged that other candidates are worth investigating.
Sutton is a pioneer of reinforcement learning and the author of a seminal textbook on the subject. DeepMind, the AI lab where he works, is deeply invested in “deep reinforcement learning,” a variant of the technique that integrates neural networks into reinforcement learning methods. In recent years, DeepMind has used deep reinforcement learning to master games such as Go, chess and StarCraft 2.
While reinforcement learning bears striking similarities to the learning mechanisms in the brains of humans and animals, it suffers from the same challenges that plague deep learning. Reinforcement learning models require extensive training to learn even the simplest things and are strictly limited to the narrow domain in which they are trained. Currently, developing deep reinforcement learning models requires very expensive compute resources, which limits research in the area to deep-pocketed companies such as Google, which owns DeepMind, and Microsoft, the quasi-owner of OpenAI.
Integration of world knowledge and common sense into AI
Computer scientist and Turing Award winner Judea Pearl, best known for his work on Bayesian networks and causal inference, emphasized that AI systems need world knowledge and common sense to make the most efficient use of the data they are fed.
“I believe we need to build systems that contain a combination of world knowledge and data,” Pearl said, adding that AI systems based solely on collecting and processing large amounts of data are doomed to fail.
Knowledge does not emerge from data, Pearl said. Rather, we use the innate structures in our brains to interact with the world, and we use data to interrogate and learn from the world, as witnessed in newborns, who learn many things without being explicitly instructed.
“This kind of structure must be implemented externally to the data. Even if we succeed by some miracle in learning that structure from data, we still need to have it in a form that can be communicated to humans,” Pearl said.
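A textbook way to picture Pearl’s point: the structure of a Bayesian network is world knowledge specified outside the data, while observed evidence updates beliefs within that structure. The toy rain/sprinkler network below, with made-up probabilities, is a stock illustration rather than Pearl’s own example.

```python
# Hand-specified structure and probabilities act as "world knowledge";
# observed evidence acts as "data". Solved by brute-force enumeration.

P_RAIN = 0.2       # prior probability it rained
P_SPRINKLER = 0.1  # prior probability the sprinkler ran

def p_wet(rain, sprinkler):
    """P(grass is wet | rain, sprinkler): hand-coded world knowledge."""
    if rain and sprinkler: return 0.99
    if rain:               return 0.90
    if sprinkler:          return 0.85
    return 0.01

def posterior_rain(wet_observed=True):
    """P(rain | wet) by enumerating the joint distribution."""
    joint = {}
    for rain in (True, False):
        for sprinkler in (True, False):
            p = ((P_RAIN if rain else 1 - P_RAIN)
                 * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)
                 * (p_wet(rain, sprinkler) if wet_observed
                    else 1 - p_wet(rain, sprinkler)))
            joint[(rain, sprinkler)] = p
    total = sum(joint.values())
    return sum(p for (rain, _), p in joint.items() if rain) / total

print(f"P(rain | grass wet) = {posterior_rain():.3f}")  # about 0.707
```

Note that the network’s structure and conditional probabilities were supplied by hand; the data (the wet grass) only updates beliefs within that externally provided structure, which is exactly the division of labor Pearl argues for.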
Yejin Choi, a computer science professor at the University of Washington, also highlighted the importance of common sense and the challenges its absence poses to current AI systems, which are focused on mapping input data to outcomes.
“We know how to solve a dataset without solving the underlying task with deep learning today,” Choi said. “That is due to the significant difference between AI and human intelligence, especially knowledge of the world. And common sense is one of the fundamental missing pieces.”
Choi also pointed out that the space of reasoning is infinite, and that reasoning itself is a generative task, very different from the categorization tasks that today’s deep learning algorithms and evaluation benchmarks are suited to. “We never enumerate much. We just reason on the fly, and this is going to be one of the key fundamental intellectual challenges as we move forward,” Choi said.
But how do we achieve common sense and reasoning in AI? Choi proposes a wide range of parallel research areas, including combining symbolic and neural representations, integrating knowledge into reasoning, and building benchmarks that go beyond categorization.
We do not yet know the full path to common sense, Choi said, adding: “But one thing is for sure: we cannot get there just by making the tallest building in the world taller. Therefore, GPT-4, -5 or -6 may not cut it.”