Model-Free Generative Replay for Lifelong Reinforcement Learning: Application to Starcraft-2

Citation

Daniels, Z., Raghavan, A., Hostetler, J., Rahman, A., Sur, I., Piacentino, M., & Divakaran, A. (2022). Model-free generative replay for lifelong reinforcement learning: Application to Starcraft-2. arXiv preprint arXiv:2208.05056.

Abstract

One approach to meeting the challenges of deep lifelong reinforcement learning (LRL) is careful management of the agent’s learning experiences, in order to learn (without forgetting) and to build internal meta-models (of the tasks, environments, agents, and world). Generative replay (GR) is a biologically inspired replay mechanism that augments learning experiences with self-labelled examples drawn from an internal generative model that is updated over time. We present a version of GR for LRL that satisfies two desiderata: (a) introspective density modelling of the latent representations of policies learned using deep RL, and (b) model-free end-to-end learning. In this paper, we study three deep learning architectures for model-free GR, starting from a naive GR and adding ingredients to achieve (a) and (b). We evaluate our proposed algorithms on three different scenarios comprising tasks from the Starcraft 2 and Minigrid domains. We report several key findings showing the impact of the design choices on quantitative metrics that include transfer learning, generalization to unseen tasks, fast adaptation after task change, performance comparable to a task expert, and minimization of catastrophic forgetting. We observe that our GR prevents drift in the mapping from a deep RL agent’s latent vector space to actions. We also show improvements in established lifelong learning metrics. We find that a small random replay buffer significantly increases the stability of training when combined with the experience replay buffer and the generated replay buffer. Overall, we find that “hidden replay” (a well-known architecture for class-incremental classification) is the most promising approach, pushing the state of the art in GR for LRL, and we observe that the architecture of the sleep model may be more important for improving performance than the types of replay used. Our experiments required only 6% of the training samples to achieve 80-90% of expert performance in most Starcraft 2 scenarios.
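To make the replay mechanism concrete, the snippet below is a minimal, hypothetical PyTorch sketch (not the authors’ released code) of a hidden-replay-style sleep phase: a small VAE (`LatentVAE`) models the density of the policy network’s latent features, and a `sleep_update` step trains the action head on a mix of fresh experiences, VAE samples self-labelled by the current policy, and a small random replay buffer of past real experiences. All module names, dimensions, and the loss weighting are assumptions made for illustration.

```python
# Hypothetical sketch of hidden replay for lifelong RL (illustrative only).
# A VAE models the density of the policy's latent features; the "sleep" update
# mixes (i) fresh experiences, (ii) generated latents self-labelled by the
# current action head, and (iii) a small random replay buffer of real data.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, LATENT_DIM, N_ACTIONS = 32, 16, 8  # assumed toy dimensions

# Feature extractor and action head of the RL agent. In a full agent the
# feature extractor is trained during the wake phase; it is frozen here.
feature_net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
action_head = nn.Linear(LATENT_DIM, N_ACTIONS)

class LatentVAE(nn.Module):
    """Density model over the policy's latent feature space."""
    def __init__(self, dim, code=8):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * code)
        self.dec = nn.Linear(code, dim)
        self.code = code

    def forward(self, h):
        mu, logvar = self.enc(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

    def sample(self, n):
        return self.dec(torch.randn(n, self.code))

vae = LatentVAE(LATENT_DIM)
opt = torch.optim.Adam(list(action_head.parameters()) + list(vae.parameters()), lr=1e-3)

def sleep_update(fresh_obs, fresh_actions, buf_obs, buf_actions, n_generated=64):
    """One consolidation step mixing the three replay sources."""
    # (i) fresh experiences from the most recent task
    h_fresh = feature_net(fresh_obs).detach()
    # (ii) generated latents, self-labelled by the current action head
    h_gen = vae.sample(n_generated).detach()
    with torch.no_grad():
        pseudo_actions = action_head(h_gen).argmax(dim=-1)
    # (iii) a small random replay buffer of past real experiences
    h_buf = feature_net(buf_obs).detach()

    h = torch.cat([h_fresh, h_gen, h_buf])
    a = torch.cat([fresh_actions, pseudo_actions, buf_actions])
    policy_loss = F.cross_entropy(action_head(h), a)

    # keep the VAE's density model of the latent space current on real latents
    real_h = torch.cat([h_fresh, h_buf])
    recon, mu, logvar = vae(real_h)
    vae_loss = F.mse_loss(recon, real_h) - 0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()

    opt.zero_grad()
    (policy_loss + vae_loss).backward()
    opt.step()
    return policy_loss.item()

# Toy usage with random tensors standing in for real trajectories.
obs = torch.randn(128, OBS_DIM)
acts = torch.randint(0, N_ACTIONS, (128,))
buf_obs, buf_acts = torch.randn(32, OBS_DIM), torch.randint(0, N_ACTIONS, (32,))
print(sleep_update(obs, acts, buf_obs, buf_acts))
```

The point of the sketch is the replay mix: generating and replaying in the latent (hidden) space rather than reconstructing raw observations, while blending in a small buffer of real past experiences alongside the generated and fresh data.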
