Controlling Cable Driven Parallel Robots Operations—Deep Reinforcement Learning Approach
| Main Authors: | , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10877801/ |
| Summary: | Deep Reinforcement Learning (DRL) is a powerful approach for generating control strategies for a variety of complex systems, representing an emerging paradigm in control applications. An important feature of DRL is that it does not explicitly model the process; instead, it relies on optimization-driven techniques to devise effective control policies. Beyond its remarkable success in simulated environments, RL holds great potential in real-world applications. This article explores the challenges involved in implementing DRL algorithms on a cable-driven parallel robot (CDPR). A key contribution of this work is the integration of a Proportional-Integral-Derivative (PID) controller within the RL framework, establishing an approach to CDPR control that leverages adaptive learning capabilities. An RL agent for reference tracking is trained using the adaptive Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, tailored to enhance CDPR adaptability and precision in dynamic environments. The first step is to test the performance of the trained agent on point-to-point robotic application tasks, which makes it possible to evaluate the adaptability and performance of the RL agent. Multiple experiments involving linear and circular trajectory scenarios are conducted to assess the versatility of the agent. This research advances the field by demonstrating the applicability of RL to complex robotic structures such as CDPRs, with results that underline the robustness and adaptability of the proposed approach. Through the TD3 adaptive learning process, the trained agent learns to select the actions that yield the most rewarding policy. |
|---|---|
| ISSN: | 2169-3536 |
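The abstract describes coupling a PID controller with a TD3-trained agent for reference tracking, but gives no implementation details. One common way such a hybrid can be arranged — purely a hedged sketch, not the paper's actual scheme — is for the learned actor to output PID gains that a discrete PID loop then applies to the tracking error. The sketch below illustrates that idea on a hypothetical one-degree-of-freedom proxy for a single cable axis; the plant model, gain values, and `agent_policy` placeholder are all assumptions for illustration.

```python
# Hedged sketch: an RL actor supplying gains to a discrete PID loop.
# The plant (unit mass with viscous damping) and all gains are
# hypothetical; the real paper uses a full CDPR model and a trained
# TD3 actor network in place of `agent_policy`.

class PID:
    """Discrete PID controller whose gains could be set by an RL agent."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def agent_policy(state):
    """Placeholder for a trained TD3 actor mapping state -> PID gains.

    A real actor network would adapt the gains to the observed state;
    here fixed, hypothetical gains stand in for its output.
    """
    return 8.0, 0.5, 1.0  # assumed (kp, ki, kd)


def track_reference(ref=1.0, steps=400, dt=0.01):
    """Point-to-point tracking on a 1-DoF unit-mass, damped proxy plant."""
    pos, vel = 0.0, 0.0
    pid = PID(*agent_policy((pos, vel, ref)), dt)
    for _ in range(steps):
        error = ref - pos
        force = pid.update(error)
        acc = force - 0.5 * vel  # unit mass with viscous damping
        vel += acc * dt
        pos += vel * dt
    return pos


if __name__ == "__main__":
    print(f"final position after 4 s: {track_reference():.3f}")
```

In a TD3 training loop, `agent_policy` would be the actor network, the tracking error over an episode would drive the reward, and the twin critics plus delayed policy updates would shape the gain schedule — none of which is shown here.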