
Scientists Working on Continual Learning to Overcome ‘Catastrophic Forgetting’ 

by DeepTech Central
September 30, 2021
in Artificial Intelligence

By John P. Desmond, AI Trends Editor 

Algorithms are trained on a dataset and may not be capable of learning new information without retraining, as opposed to the human brain, which learns constantly and builds on knowledge over time. 

An artificial neural network that forgets previously learned information upon learning new information demonstrates what is called “catastrophic forgetting.”
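
To make the failure mode concrete, here is a minimal, self-contained PyTorch sketch (illustrative only, not from the article) that trains one small network on two synthetic tasks in sequence; accuracy on the first task typically collapses after training on the second.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two synthetic binary classification tasks with different input regions
# and labeling rules.
def make_task(shift):
    x = torch.randn(512, 2) + shift           # inputs clustered around `shift`
    y = (x.sum(dim=1) > 2 * shift).long()     # task-specific decision rule
    return x, y

xa, ya = make_task(0.0)   # task A
xb, yb = make_task(3.0)   # task B

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def fit(x, y, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

fit(xa, ya)
print("task A after training on A:", accuracy(xa, ya))   # high
fit(xb, yb)                                               # now train on B only
print("task A after training on B:", accuracy(xa, ya))   # typically degrades
print("task B after training on B:", accuracy(xb, yb))   # high
```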

In an effort to nudge AI to work more like the human brain in this regard, a team of AI and neuroscience researchers has banded together to form ContinualAI, a nonprofit organization and open community of AI continual learning (CL) enthusiasts, described in a recent account in VentureBeat. 

ContinualAI recently announced Avalanche, a library of tools compiled over the course of a year from over 40 contributors with the goal of making CL research easier and more reproducible. The organization was launched three years ago and has attracted support for its mission to advance CL, which its members see as fundamental for the future of AI.  
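
The snippet below is a minimal sketch of the workflow Avalanche is designed for, based on the library's public examples; the exact module paths have shifted between releases, so treat the imports as assumptions.

```python
# pip install avalanche-lib
# (Module paths have moved between releases; these imports follow the
# library's early public examples and may need adjusting.)
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training import Naive

benchmark = SplitMNIST(n_experiences=5)      # MNIST split into 5 sequential tasks
model = SimpleMLP(num_classes=10)

# "Naive" is plain sequential fine-tuning: the baseline that forgets.
strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
)

for experience in benchmark.train_stream:    # tasks arrive one at a time
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)     # re-evaluate on all tasks
```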

“ContinualAI was founded with the idea of pushing the boundaries of science through distributed, open collaboration,” stated Vincenzo Lomonaco, cofounding president of the organization and assistant professor at the University of Pisa in Italy. “We provide a comprehensive platform to produce, discuss, and share original research in AI. And we do this completely for free, for anyone.” 

OpenAI research scientist Jeff Clune, who helped to cofound Uber AI Labs in 2017, has called catastrophic forgetting the “Achilles’ heel” of machine learning. In an effort to address it, he published a paper last year detailing ANML (A Neuromodulated Meta-Learning Algorithm), an algorithm that managed to learn 600 sequential tasks with minimal catastrophic forgetting by “meta-learning” solutions to problems instead of manually engineering solutions. Separately, Alphabet’s DeepMind published research in 2017 suggesting that catastrophic forgetting isn’t an insurmountable challenge for neural networks.   
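
The 2017 DeepMind research referenced here introduced elastic weight consolidation (EWC), which penalizes changes to weights that mattered for earlier tasks. A minimal sketch of that penalty term follows (variable names are illustrative):

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=1000.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta_star_i)^2.

    old_params : parameters saved after finishing the previous task
    fisher     : diagonal Fisher information estimates (one tensor per
                 parameter) measuring how important each weight was
    """
    penalty = torch.tensor(0.0)
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# While training on the new task:
#   loss = task_loss + ewc_penalty(model, old_params, fisher)
```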

Still, the answer is elusive. “The potential of continual learning exceeds catastrophic forgetting and begins to touch on more interesting questions of implementing other cognitive learning properties in AI,” stated Keiland Cooper, a cofounding member of ContinualAI and a neuroscience research associate at the University of California, Irvine, in an interview with VentureBeat. As an example, he mentioned transfer learning, a machine learning method in which a model developed for one task is reused as the starting point for a model on a second task. In deep learning, pretrained models are used as starting points for computer vision and NLP tasks, owing to the vast compute and time resources required to develop neural network models for these problems. 
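
As a concrete illustration (a generic sketch, not code from ContinualAI): a common transfer learning recipe freezes a pretrained backbone and retrains only a new output head for the second task.

```python
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet (weights download on first use).
model = models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and swap in a fresh head for the new task (here, a hypothetical
# 5-class problem). Only this layer's weights will be updated.
model.fc = nn.Linear(model.fc.in_features, 5)
```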

ContinualAI has grown to over 1,000 members since its founding. “While there has been a renewed interest in continual learning in AI research, the neuroscience of how humans and animals can accomplish these feats is still largely unknown,” Cooper stated. He sees the organization as well positioned to foster collaboration among AI researchers, cognitive scientists, and neuroscientists. 

Google DeepMind Sees a Non-Stationary World   

Raia Hadsell is a research scientist at Google DeepMind in London whose research focus includes continual and transfer learning. “Artificial intelligence research has seen enormous progress over the past few decades, but it predominantly relies on fixed datasets and stationary environments,” she stated as lead author of a paper on continual learning published last December in Trends in Cognitive Sciences. “Continual learning is an increasingly relevant area of study that asks how artificial systems might learn sequentially, as biological systems do, from a continuous stream of correlated data.”  

A benchmark for success in AI is the ability to emulate human learning, such as recognizing images, playing games, or driving a car. “We then develop machine learning models that can match or exceed these given enough training data. This paradigm puts the emphasis on the end result, rather than the learning process, and overlooks a critical characteristic of human learning: that it is robust to changing tasks and sequential experience.” 

The world is non-stationary, unlike frozen machine learning models, “so human learning has evolved to thrive in dynamic learning settings,” the authors state. “However, this robustness is in stark contrast to the most powerful modern machine learning methods, which perform well only when presented with data that are carefully shuffled, balanced, and homogenized. Not only do these models underperform when presented with changing or incremental data regimes, in some cases they fail completely or suffer from rapid performance degradation on earlier learned tasks, known as catastrophic forgetting.”  

Facebook Releases Open Source Models to Further CL Research  

Thus, work is ongoing to make AI smarter. Recent research from Facebook describes a new open source benchmark and model for continual learning, called CTrL.    

“We expect that CL models will require less supervision, sidestepping one of the most significant shortcomings of modern AI systems: their reliance on large human-labeled data sets,” states lead author Marc’Aurelio Ranzato, research scientist, in the post on the Facebook AI Research blog. 

The model aims to measure how well knowledge is transferred between two tasks, how well the CL model retains previously learned skills (thus avoiding catastrophic forgetting), and how it scales to a large number of tasks. “Until now, there has been no effective standard benchmark for evaluating CL systems across these axes,” the author states. The research was conducted in collaboration with Sorbonne University in Paris, France.  

CTrL works by evaluating the amount of knowledge transferred from a sequence of observed tasks, and thus the model’s ability to transfer to similar tasks. The benchmark proposes numerous streams of tasks to assess multiple dimensions of transfer, and a long sequence of tasks for assessing the ability of CL models to scale. 
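
In outline, such an evaluation trains on each task in sequence and re-tests every task seen so far, producing an accuracy matrix from which both transfer and forgetting can be read off. The sketch below is an illustrative structure with placeholder names, not CTrL's actual API:

```python
# Illustrative continual-learning evaluation loop; `learner`, `tasks`,
# and their methods are placeholders, not CTrL's actual API.
def evaluate_stream(learner, tasks):
    n = len(tasks)
    acc = [[None] * n for _ in range(n)]      # acc[i][j]: accuracy on task j
    for i, task in enumerate(tasks):          # after training through task i
        learner.train_task(task.train_data)
        for j in range(i + 1):
            acc[i][j] = learner.evaluate(tasks[j].test_data)
    # Forgetting on task j: best accuracy it ever reached minus its
    # accuracy after the final task.
    forgetting = [
        max(acc[i][j] for i in range(j, n)) - acc[n - 1][j]
        for j in range(n)
    ]
    return acc, forgetting
```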

The researchers also propose a new model, called Modular Networks with Task-Driven Priors (MNTDP), that, when confronted with a new task, determines which previously learned models can be applied. 

“We’ve long been committed to the principles of open science, and this research is still in its early stages, so we’re excited to work with the community on advancing CL,” the authors state. 

Read the source articles and information in VentureBeat, in Trends in Cognitive Sciences, and on the Facebook AI Research blog. 
