Optimizing Pretrained Autonomous Driving Models Using Deep Reinforcement Learning

Bibliographic Details
Main Authors: Vasileios Kochliaridis, Ioannis Vlahavas
Format: Article
Language: English
Published: MDPI AG 2025-07-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/15/8411
Description
Summary: Vision-based end-to-end navigation systems have shown impressive capabilities, especially when combined with Imitation Learning (IL) and advanced Deep Learning architectures such as Transformers. One such example is CIL++, a Transformer-based architecture that learns to map navigation states to vehicle controls from expert demonstrations alone. Nevertheless, reliance on expert datasets limits generalization and can lead to failures in unfamiliar circumstances. Deep Reinforcement Learning (DRL) can address this issue by fine-tuning the pretrained policy with a reward function that targets its weaknesses through interaction with the environment. However, fine-tuning with DRL can lead to the Catastrophic Forgetting (CF) problem, in which the policy forgets the expert behaviors learned from demonstrations as it optimizes the new reward function. In this paper, we present CILRLv3, a DRL-based training method that is immune to CF, enabling pretrained navigation agents to improve their driving skills in new scenarios.
ISSN: 2076-3417
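
The summary above describes the general recipe of fine-tuning an IL-pretrained driving policy with DRL while guarding against catastrophic forgetting. Since this record does not detail CILRLv3 itself, the following Python sketch is a hypothetical illustration of that recipe only: a REINFORCE-style update combined with a behavior-cloning penalty toward a frozen copy of the pretrained policy. Every name in it (PolicyNet, finetune_step, bc_weight) and the toy network are assumptions, not the paper's method.

# Hypothetical illustration only: NOT the CILRLv3 algorithm, just one generic
# pattern for DRL fine-tuning with a behavior-cloning anchor that limits
# catastrophic forgetting. All names, shapes, and data are placeholders.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Toy stand-in for a pretrained vision-based driving policy."""
    def __init__(self, state_dim=16, action_dim=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # e.g. steering/throttle in [-1, 1]
        )

    def forward(self, states):
        return self.body(states)

def finetune_step(policy, frozen_il_policy, states, actions, advantages, bc_weight=0.5):
    """One update: policy-gradient loss on rollout data plus a penalty that
    keeps the fine-tuned policy close to the frozen IL-pretrained policy."""
    mean = policy(states)
    dist = torch.distributions.Normal(mean, 0.1)   # fixed exploration std
    log_prob = dist.log_prob(actions).sum(dim=-1)
    rl_loss = -(advantages * log_prob).mean()      # REINFORCE-style surrogate
    with torch.no_grad():
        expert_mean = frozen_il_policy(states)     # expert behavior to preserve
    bc_loss = F.mse_loss(mean, expert_mean)        # anchor against forgetting
    return rl_loss + bc_weight * bc_loss

policy = PolicyNet()                       # pretend this was pretrained with IL
frozen_il_policy = copy.deepcopy(policy)   # frozen snapshot of the IL policy
for p in frozen_il_policy.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
states = torch.randn(32, 16)                # placeholder navigation states
actions = torch.randn(32, 2).clamp(-1, 1)   # placeholder rollout actions
advantages = torch.randn(32)                # placeholder advantage estimates

loss = finetune_step(policy, frozen_il_policy, states, actions, advantages)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In this sketch, bc_weight trades off optimizing the new reward against staying close to the expert behaviors; the actual mechanism CILRLv3 uses to avoid CF may be entirely different.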