Intelligent Robot in Unknown Environments: Walk Path Using Q-Learning and Deep Q-Learning
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-03-01 |
| Series: | Automation |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2673-4052/6/1/12 |
| Summary: | Autonomous navigation is essential for mobile robots to operate efficiently in complex environments. This study investigates Q-learning and Deep Q-learning to improve navigation performance. The research examines their effectiveness in complex maze configurations, focusing on how the epsilon-greedy strategy influences the Q-learning agent’s ability to reach its goal in minimal time. A distinctive aspect of this work is the adaptive tuning of hyperparameters, where alpha and gamma values are dynamically adjusted throughout training. This eliminates the need for manually fixed parameters and enables the learning algorithm to determine optimal values automatically, ensuring adaptability to diverse environments rather than being constrained to specific cases. By integrating neural networks, Deep Q-learning enhances decision-making in complex navigation tasks. Simulations carried out in MATLAB validate the proposed approach, illustrating its effectiveness in resource-constrained systems while preserving robust and efficient decision-making. Experimental results demonstrate that adaptive hyperparameter tuning significantly improves learning efficiency, leading to faster convergence and reduced navigation time. Additionally, Deep Q-learning exhibits superior performance in complex environments, showcasing enhanced decision-making capabilities in high-dimensional state spaces. These findings highlight the advantages of reinforcement learning-based navigation and emphasize how adaptive exploration strategies and dynamic parameter adjustments enhance performance across diverse scenarios. |
|---|---|
| ISSN: | 2673-4052 |
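
The summary describes tabular Q-learning with epsilon-greedy exploration and adaptive alpha/gamma schedules, validated in MATLAB. As a rough illustration of that setup, the sketch below implements epsilon-greedy Q-learning on a small grid maze in Python. The maze size, goal position, reward values, and the particular decay rules for alpha, gamma, and epsilon are illustrative assumptions; the record does not specify the paper’s exact adaptive scheme.

```python
import numpy as np

# Minimal tabular Q-learning sketch for a grid maze, illustrating the
# epsilon-greedy exploration and adaptive alpha/gamma tuning the summary
# describes. Maze layout, rewards, and decay schedules are assumptions,
# not the paper's exact configuration.
#
# Standard Q-learning update:
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))

N = 5                                          # assumed 5x5 maze
GOAL = (4, 4)                                  # assumed goal cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

Q = np.zeros((N, N, len(ACTIONS)))
visits = np.zeros_like(Q)                      # per (state, action) visit counts

def step(state, action):
    """Move the agent; boundaries keep it in place. Assumed rewards:
    +1 at the goal, -0.01 per step to encourage short paths."""
    r, c = state
    dr, dc = ACTIONS[action]
    nxt = (min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

epsilon = 1.0
for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy: explore with probability epsilon, else exploit.
        if np.random.rand() < epsilon:
            action = np.random.randint(len(ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        nxt, reward, done = step(state, action)
        visits[state][action] += 1
        # Adaptive hyperparameters (illustrative): alpha decays with the
        # visit count of (s, a); gamma is annealed toward 0.99 over training.
        alpha = 1.0 / (1.0 + visits[state][action])
        gamma = min(0.99, 0.5 + 0.001 * episode)
        td_target = reward + gamma * np.max(Q[nxt])
        Q[state][action] += alpha * (td_target - Q[state][action])
        state = nxt
    epsilon = max(0.05, epsilon * 0.99)  # decay exploration across episodes
```

For the Deep Q-learning variant the summary mentions, the table `Q` would be replaced by a neural network approximating Q(s, a; θ), typically trained with experience replay and a target network; the record does not detail the architecture the authors used.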