COMPARATIVE ANALYSIS OF DOUBLE DEEP Q-NETWORK AND PROXIMAL POLICY OPTIMIZATION FOR LANE-KEEPING IN AUTONOMOUS DRIVING
| Main Author: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Information Technology Publishing House, 2025-02-01 |
| Series: | Problems of Information Society |
| Online Access: | https://jpis.az/uploads/article/en/2025_1/COMPARATIVE_ANALYSIS_OF_DOUBLE_DEEP_Q-NETWORK_AND_PROXIMAL_POLICY_OPTIMIZATION_FOR_LANE-KEEPING_IN_AUTONOMOUS_DRIVING.pdf |
| Summary: | Lane-keeping is a vital function in autonomous driving, important for vehicle safety, stability, and adherence to traffic flow. The difficulty of lane-keeping control lies in balancing precision and responsiveness across varied driving conditions. This article presents a comparative examination of two reinforcement learning (RL) algorithms, Double Deep Q-Network (Double DQN) and Proximal Policy Optimization (PPO), for lane-keeping across discrete and continuous action spaces. Double DQN, an upgrade of the standard Deep Q-Network, mitigates overestimation bias in Q-values, demonstrating its usefulness in discrete action spaces. This method shines in low-dimensional environments such as highways, where lane-keeping requires frequent, discrete adjustments. In contrast, PPO, a robust policy-gradient method built for continuous control, performs well in high-dimensional situations, such as urban roadways and curved highways, where continual, accurate steering changes are necessary. The methods were tested in MATLAB/Simulink simulations that model both highway and urban driving conditions. Each model integrates vehicle dynamics and neural network topologies to build control strategies. Results demonstrate that Double DQN consistently maintains lane position in highway settings, exploiting its ability to reduce overestimation in Q-values and thereby attaining stable lane centering. PPO excels in dynamic and unpredictable settings, managing continual control adjustments well, especially under difficult traffic conditions and on curving roadways. This study underscores the importance of matching RL algorithms to the action-space requirements of specific driving environments, with Double DQN excelling in discrete tasks and PPO in continuous adaptive control, contributing valuable insights toward enhancing the flexibility and safety of autonomous vehicles. |
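The summary's central distinction between Double DQN and standard DQN is how the bootstrap target is computed. The sketch below, which is a generic illustration and not taken from the article (the study itself used MATLAB/Simulink; the action count and reward value here are hypothetical stand-ins), shows the two target formulas side by side: standard DQN takes the max over the target network's own Q-values, while Double DQN lets the online network select the action and the target network evaluate it, which reduces overestimation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 3  # hypothetical discrete steering set: left / hold / right
gamma = 0.99   # discount factor

# Stand-ins for network outputs Q(s', .) at one next state s'
q_online = rng.normal(size=n_actions)  # online network
q_target = rng.normal(size=n_actions)  # target network
reward = 1.0                           # e.g., reward for staying lane-centered

# Standard DQN target: the SAME (target) network both selects and
# evaluates the action -> systematic overestimation bias
y_dqn = reward + gamma * q_target.max()

# Double DQN target: online network SELECTS the greedy action,
# target network EVALUATES it -> decoupling reduces the bias
a_star = int(np.argmax(q_online))
y_double = reward + gamma * q_target[a_star]

print(f"DQN target: {y_dqn:.4f}, Double DQN target: {y_double:.4f}")
```

Because the evaluated Q-value `q_target[a_star]` can never exceed `q_target.max()`, the Double DQN target is always less than or equal to the standard DQN target, which is the mechanism behind the stable lane centering the summary attributes to Double DQN in discrete highway settings.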
| ISSN: | 2077-964X, 2309-7566 |