By AI Trends Staff
Scientists are working on creating better AI that learns through self-supervision, the pinnacle being AI that could learn like a baby, from observation of its environment and interaction with people.
This would be an important advance, because today’s AI is limited by the volume of labeled data required to train machine learning algorithms and by the brittleness of those algorithms when circumstances change.
“This is the single most important problem to solve in AI today,” stated Yann LeCun, chief AI scientist at Facebook, in an account in the Wall Street Journal. Some early success with self-supervised learning has been seen in the natural language processing used in mobile phones, smart speakers, and customer service bots.
Training AI today is time-consuming and expensive. The promise of self-supervised learning is for AI to train itself without the need for external labels attached to the data. Dr. LeCun is now focused on applying self-supervised learning to computer vision, a more complex problem in which computers interpret images such as a person’s face.
The next phase, which he thinks is possible in the next decade or two, is to create a machine that can “learn how the world works by watching video, listening to audio, and reading text,” he stated.
More than one approach is being tried to help AI learn by itself. One is the neuro-symbolic approach, which combines deep learning with symbolic AI, the latter representing human knowledge explicitly as facts and rules. IBM is experimenting with this approach in developing a bot that works alongside human engineers, reading computer logs to spot system failures, understand why a system crashed, and offer a remedy. This could increase the pace of scientific discovery, thanks to its ability to spot patterns not otherwise evident, according to Dario Gil, director of IBM Research. “It would help us address huge problems, such as climate change and developing vaccines,” he stated.
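The neuro-symbolic idea can be sketched in a few lines. The rule names, thresholds, and scoring function below are invented for illustration and do not reflect IBM’s actual system: a stand-in “learned” component flags suspicious log lines, while an explicit rule base (the symbolic side) maps matched patterns to a human-readable remedy.

```python
# Hypothetical neuro-symbolic log diagnosis, for illustration only.
# Symbolic side: explicit facts and rules mapping patterns to remedies.
SYMBOLIC_RULES = {
    "OutOfMemoryError": "Heap exhausted -- consider raising the memory limit.",
    "Connection refused": "A dependent service was unreachable at crash time.",
}

def learned_suspicion(line):
    # Stand-in for a trained model: score lines by crude keyword evidence.
    keywords = ("error", "fatal", "refused", "exception")
    return sum(word in line.lower() for word in keywords) / len(keywords)

def diagnose(log_lines, threshold=0.25):
    findings = []
    for line in log_lines:
        if learned_suspicion(line) >= threshold:            # "neural" scoring
            for pattern, remedy in SYMBOLIC_RULES.items():  # symbolic lookup
                if pattern in line:
                    findings.append(remedy)
    return findings
```

In a real system the scoring function would be a trained model and the rule base would encode engineering knowledge, but the division of labor is the same: learning proposes, rules explain.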
Child Psychologists Working with Computer Scientists on MESS
DARPA is working with the University of California at Berkeley on a research project, Machine Common Sense, funding collaborations between child psychologists and computer scientists. The system is called MESS, for Model-Building, Exploratory, Social Learning System.
“Human babies are the best learners in the universe. How do they do it? And could we get an AI to do the same?” asked Alison Gopnik, a professor of psychology at Berkeley and the author of “The Philosophical Baby” and “The Scientist in the Crib,” among other books, in a recent article she wrote for the Wall Street Journal.
“Even with a lot of supervised data, AIs can’t make the same kinds of generalizations that human children can,” Gopnik said. “Their knowledge is much narrower and more limited, and they are easily fooled. Current AIs are like children with super-helicopter-tiger moms—programs that hover over the learner dictating whether it is right or wrong at every step. The helicoptered AI children can be very good at learning to do specific things well, but they fall apart when it comes to resilience and creativity. A small change in the learning problem means that they have to start all over again.”
The scientists are also experimenting with AI motivated by curiosity, which leads to a more resilient learning style. This approach, called “active learning,” is a frontier in AI research.
The challenge of the DARPA Machine Common Sense program is to design an AI that understands the basic features of the world as well as an 18-month-old. “Some computer scientists are trying to build common sense models into the AIs, though this isn’t easy. But it is even harder to design an AI that can actually learn those models the way that children do,” Dr. Gopnik wrote. “Hybrid systems that combine models with machine learning are one of the most exciting developments at the cutting edge of current AI.”
Training AI models on labeled datasets is likely to play a diminished role as self-supervised learning comes into wider use, LeCun said during a session at the virtual International Conference on Learning Representations (ICLR) 2020, which also included Turing Award winner and Canadian computer scientist Yoshua Bengio.
Self-supervised learning algorithms have the advantage of generating their own labels from the data, by exposing relationships between the data’s parts.
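A toy example, not drawn from any production system, makes the label-generation idea concrete: by masking each word of a sentence in turn, the data itself supplies both the input (the masked context) and the label (the hidden word), with no human annotation.

```python
# Toy self-supervised task: the labels come from the data itself.
def masked_pairs(sentence):
    words = sentence.split()
    pairs = []
    for i, target in enumerate(words):
        # Hide one word; the hidden word becomes the training label.
        context = words[:i] + ["[MASK]"] + words[i + 1:]
        pairs.append((" ".join(context), target))
    return pairs

pairs = masked_pairs("the cat sat on the mat")
# Yields pairs such as ("[MASK] cat sat on the mat", "the") --
# every sentence becomes many labeled examples for free.
```

This masked-prediction setup is the same pattern that has driven self-supervised success in natural language processing, where models learn by predicting hidden parts of their input.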
“Most of what we learn as humans and most of what animals learn is in a self-supervised mode, not a reinforcement mode. It’s basically observing the world and interacting with it a little bit, mostly by observation in a test-independent way,” stated LeCun, in an account from VentureBeat. “This is the type of [learning] that we don’t know how to reproduce with machines.”
Bengio was optimistic about the potential for AI to gain from the field of neuroscience, in particular for its explorations of consciousness and conscious processing. Bengio predicted that new studies will clarify the way high-level semantic variables connect with how the brain processes information, including visual information. These variables that humans communicate using language could lead to an entirely new generation of deep learning models, he suggested.
“There’s a lot of progress that could be achieved by bringing together things like grounded language learning, where we’re jointly trying to understand a model of the world and how high-level concepts are related to each other,” said Bengio. “Human conscious processing is exploiting assumptions about how the world might change, which can be conveniently implemented as a high-level representation.”
Bengio Delivered NeurIPS 2019 Talk on System 2 Self-Supervised Models
At the 2019 Conference on Neural Information Processing Systems (NeurIPS 2019), Bengio spoke on this topic in a keynote speech entitled, “From System 1 Deep Learning to System 2 Deep Learning,” with System 2 referring to self-supervised models.
“We want to have machines that understand the world, that build good world models, that understand cause and effect, and can act in the world to acquire knowledge,” he said in an account in TechTalks.
The intelligent systems should be able to generalize to different distributions in data, just as children learn to adapt as the environment changes around them. “We need systems that can handle those changes and do continual learning, lifelong learning, and so on,” Bengio stated. “This is a long-standing goal for machine learning, but we haven’t yet built a solution to this.”