Intelligent Robot in Unknown Environments: Walk Path Using Q-Learning and Deep Q-Learning
Autonomous navigation is essential for mobile robots to efficiently operate in complex environments. This study investigates Q-learning and Deep Q-learning to improve navigation performance. The research examines their effectiveness in complex maze configurations, focusing on how the epsilon-greedy strategy influences the agent’s ability to reach its goal in minimal time using Q-learning. A distinctive aspect of this work is the adaptive tuning of hyperparameters, where alpha and gamma values are dynamically adjusted throughout training. This eliminates the need for manually fixed parameters and enables the learning algorithm to automatically determine optimal values, ensuring adaptability to diverse environments rather than being constrained to specific cases. By integrating neural networks, Deep Q-learning enhances decision-making in complex navigation tasks. Simulations carried out in MATLAB environments validate the proposed approach, illustrating its effectiveness in resource-constrained systems while preserving robust and efficient decision-making. Experimental results demonstrate that adaptive hyperparameter tuning significantly improves learning efficiency, leading to faster convergence and reduced navigation time. Additionally, Deep Q-learning exhibits superior performance in complex environments, showcasing enhanced decision-making capabilities in high-dimensional state spaces. These findings highlight the advantages of reinforcement learning-based navigation and emphasize how adaptive exploration strategies and dynamic parameter adjustments enhance performance across diverse scenarios.
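The abstract describes tabular Q-learning with epsilon-greedy exploration and adaptive tuning of the learning rate (alpha) and discount factor (gamma), with Deep Q-learning used for higher-dimensional state spaces. The sketch below is a minimal Python illustration of the tabular variant on a toy grid maze; the maze layout, reward values, and the specific decay schedules for epsilon, alpha, and gamma are assumptions made for this example and are not taken from the article (whose simulations were run in MATLAB).

```python
# Minimal sketch (not the authors' code): tabular Q-learning on a small grid maze
# with epsilon-greedy exploration and an assumed adaptive schedule for alpha and gamma.
import numpy as np

rng = np.random.default_rng(0)

# 5x5 maze: 0 = free cell, 1 = wall; start at (0, 0), goal at (4, 4). Layout is illustrative.
maze = np.zeros((5, 5), dtype=int)
maze[1, 1:4] = 1
maze[3, 0:3] = 1
start, goal = (0, 0), (4, 4)

actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
Q = np.zeros((*maze.shape, len(actions)))     # Q-table: one value per (cell, action)

def step(state, action):
    """Apply an action; bumping a wall or the border leaves the state unchanged."""
    r, c = state[0] + actions[action][0], state[1] + actions[action][1]
    if 0 <= r < maze.shape[0] and 0 <= c < maze.shape[1] and maze[r, c] == 0:
        state = (r, c)
    if state == goal:
        return state, 10.0, True   # goal reward (assumed value)
    return state, -0.1, False      # small step penalty encourages short paths

episodes, max_steps = 500, 200
epsilon = 1.0
for ep in range(episodes):
    # Assumed adaptive schedules: alpha decays with episodes, gamma grows toward 0.99.
    alpha = max(0.1, 1.0 / (1.0 + 0.01 * ep))
    gamma = min(0.99, 0.8 + 0.001 * ep)
    state = start
    for _ in range(max_steps):
        if rng.random() < epsilon:                 # explore
            action = int(rng.integers(len(actions)))
        else:                                      # exploit
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Standard Q-learning update toward the one-step temporal-difference target.
        td_target = reward + gamma * np.max(Q[next_state]) * (not done)
        Q[state][action] += alpha * (td_target - Q[state][action])
        state = next_state
        if done:
            break
    epsilon = max(0.05, epsilon * 0.99)            # decay exploration over training

# Greedy rollout of the learned policy from the start cell.
state, path = start, [start]
for _ in range(50):
    state, _, done = step(state, int(np.argmax(Q[state])))
    path.append(state)
    if done:
        break
print("Greedy path:", path)
```

Replacing the table lookup `Q[state]` with a small neural network that maps states to action values gives the Deep Q-learning variant the article applies to larger, high-dimensional environments.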
| Main Authors: | Mouna El Wafi, My Abdelkader Youssefi, Rachid Dakir, Mohamed Bakir |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-03-01 |
| Series: | Automation |
| Subjects: | Q-learning; deep Q-learning; reinforcement learning; neural network; path-planning |
| Online Access: | https://www.mdpi.com/2673-4052/6/1/12 |
| id | doaj-art-ab01b7d5c98844699e791a7bf4ec8ddd |
|---|---|
| collection | DOAJ |
| issn | 2673-4052 |
| doi | 10.3390/automation6010012 |
| citation | Automation, vol. 6, no. 1, art. 12 (MDPI AG, 2025-03-01) |
| affiliations | Mouna El Wafi, My Abdelkader Youssefi, and Mohamed Bakir: Engineering Laboratory, Industrial Management and Innovation, Faculty of Sciences and Technics, Hassan First University of Settat, Settat 26000, Morocco. Rachid Dakir: Laboratory of Computer Systems & Vision, Polydisciplinary Faculty of Ouarzazate, Ibnou Zohr University, Ouarzazate 45000, Morocco. |