Continual Learning for Deep Reinforcement Learning with regularization

  • Type: Master's thesis
  • Start date: immediately
  • Supervision:

    Yitian Shi



Catastrophic forgetting in Deep Reinforcement Learning (DRL), particularly in robotic manipulation, is a significant barrier to building adaptable AI systems. Regularization-based continual learning methods such as Elastic Weight Consolidation (EWC), Memory-Aware Synapses (MAS), and Synaptic Intelligence (SI) offer promising ways to mitigate this issue. EWC, for instance, preserves previously learned information while new knowledge is acquired: it identifies the network weights most important for earlier tasks and penalizes changes to them. This research aims to investigate these methods, with a primary focus on EWC, to develop robots capable of learning and adapting over time without losing prior knowledge.
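As a rough illustration of the idea behind EWC, the method adds a quadratic penalty that anchors each parameter to its value after the previous task, weighted by an estimate of that parameter's importance (typically the diagonal of the Fisher information). A minimal NumPy sketch follows; the function name and the regularization strength `lam` are illustrative, not prescribed by this topic:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC regularization term: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameters (flattened)
    theta_star -- parameters after training on the previous task
    fisher     -- diagonal Fisher estimate: importance of each parameter
    lam        -- trade-off between the new task and retaining the old one
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# During training on a new task, this term is simply added to the task loss:
#   total_loss = task_loss(theta) + ewc_penalty(theta, theta_star, fisher, lam)
```

Parameters with a large Fisher value are pulled strongly back toward their old values, while unimportant parameters remain free to adapt to the new task.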




You will carry out a comprehensive exploration of regularization-based continual learning methods, most prominently Elastic Weight Consolidation (EWC), together with Memory-Aware Synapses (MAS) and Synaptic Intelligence (SI), applied to DRL algorithms for robotic manipulation. Your work will involve transferring these regularization approaches to practical vision-based robotic manipulation scenarios and conducting a rigorous analysis of their effectiveness. The goal is to advance the field of robotic manipulation by enabling robots to learn and adapt over time without forgetting previous knowledge.
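For contrast with EWC's Fisher-based weighting, MAS estimates parameter importance in an unsupervised way, as the average sensitivity of the squared output norm to each parameter. A toy NumPy sketch for a linear model f(x) = W x, where this gradient has the closed form 2 (W x) xᵀ, is shown below; the model and function name are illustrative only:

```python
import numpy as np

def mas_importance(W, X):
    """MAS importance estimate for a linear model f(x) = W @ x.

    Omega_ij = mean over samples of | d ||f(x)||^2 / d W_ij |,
    which for a linear map equals | 2 * (W @ x) x^T |.
    No labels are needed -- importance is derived from the outputs alone.
    """
    omega = np.zeros_like(W)
    for x in X:
        omega += np.abs(2.0 * np.outer(W @ x, x))
    return omega / len(X)
```

As in EWC, the resulting importance weights then enter a quadratic penalty on parameter changes when the next task is learned.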




Solid knowledge and experience in computer vision and deep learning.

Basic knowledge of reinforcement learning; additional hands-on experience is highly desirable.

Coding skills in Python and familiarity with Linux.