Integrating cyber-physical systems with embedding technology for controlling autonomous vehicle driving
Cyber-physical systems (CPSs) in autonomous vehicles must handle highly dynamic and uncertain settings, where unanticipated impediments, shifting traffic conditions, and environmental changes all pose substantial decision-making challenges. Deep reinforcement learning (DRL) has emerged as a strong tool for dealing with such uncertainty.
| Main Authors: | Manal Abdullah Alohali, Hamed Alqahtani, Abdulbasit Darem, Monir Abdullah, Yunyoung Nam, Mohamed Abouhawwash |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | PeerJ Inc., 2025-06-01 |
| Series: | PeerJ Computer Science |
| Subjects: | Autonomous vehicles; Cyber-physical systems; Reinforcement learning; Deep Q networks; Embedded technology |
| Online Access: | https://peerj.com/articles/cs-2823.pdf |
| author | Manal Abdullah Alohali, Hamed Alqahtani, Abdulbasit Darem, Monir Abdullah, Yunyoung Nam, Mohamed Abouhawwash |
|---|---|
| collection | DOAJ |
| description | Cyber-physical systems (CPSs) in autonomous vehicles must handle highly dynamic and uncertain settings, where unanticipated impediments, shifting traffic conditions, and environmental changes all pose substantial decision-making challenges. Deep reinforcement learning (DRL) has emerged as a strong tool for dealing with such uncertainty, yet current DRL models struggle to ensure safety and optimal behaviour in indeterminate settings due to the difficulty of understanding dynamic reward systems. To address these constraints, this study incorporates double deep Q networks (DDQN) to improve the agent’s adaptability under uncertain driving conditions. A structured reward system is established to accommodate real-time fluctuations, resulting in safer and more efficient decision-making. The study acknowledges the technological limitations of automotive CPSs and, in addition to algorithmic enhancements, investigates hardware acceleration as a potential remedy. Because of their post-manufacturing adaptability, parallel processing capabilities, and reconfigurability, field programmable gate arrays (FPGAs) are used to execute reinforcement learning in real time. Using essential parameters, including collision rate, behaviour similarity, travel distance, speed control, total rewards, and timesteps, the proposed method is thoroughly tested in the TORCS Racing Simulator. The findings show that combining FPGA-based hardware acceleration with DDQN improves computational efficiency and decision-making reliability, tackling significant issues caused by uncertainty in autonomous driving CPSs. In addition to advancing reinforcement learning applications in CPSs, this work opens up possibilities for future investigations into real-world generalisation, adaptive reward mechanisms, and scalable hardware implementations to further reduce uncertainty in autonomous systems. |
| issn | 2376-5992 |
| doi | 10.7717/peerj-cs.2823 |
| affiliations | Manal Abdullah Alohali: Department of Information Systems, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. Hamed Alqahtani: Department of Information Systems, Abha, Saudi Arabia. Abdulbasit Darem: Department of Computer Science, Northern Border University, Arar, Saudi Arabia. Monir Abdullah: Department of Computer Science and Artificial Intelligence, Bisha, Saudi Arabia. Yunyoung Nam: Department of ICT Convergence, Soonchunhyang University, Asan, Republic of Korea. Mohamed Abouhawwash: Department of Computational Mathematics, Michigan State University, East Lansing, United States |
| title | Integrating cyber-physical systems with embedding technology for controlling autonomous vehicle driving |
| topic | Autonomous vehicles Cyber-physical systems Reinforcement learning Deep Q networks Embedded technology |
| url | https://peerj.com/articles/cs-2823.pdf |
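The abstract above leans on the double deep Q-network (DDQN) update. As a minimal sketch of that standard rule (van Hasselt et al.) — using NumPy and toy values, not the paper's actual networks or reward design — the online network picks the greedy next action and the target network evaluates it:

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN bootstrap targets: action *selection* uses the online
    network, action *evaluation* uses the target network, which reduces
    the overestimation bias of vanilla DQN."""
    greedy = np.argmax(next_q_online, axis=1)                   # selection
    evaluated = next_q_target[np.arange(len(rewards)), greedy]  # evaluation
    return rewards + gamma * evaluated * (1.0 - dones)          # mask terminal states

# Toy batch: 2 transitions, 3 discrete actions (hypothetical values).
rewards       = np.array([1.0, 0.5])
next_q_online = np.array([[0.1, 0.9, 0.2],
                          [0.3, 0.2, 0.8]])
next_q_target = np.array([[0.2, 0.7, 0.1],
                          [0.4, 0.1, 0.6]])
dones         = np.array([0.0, 1.0])  # second transition ends the episode

targets = ddqn_targets(rewards, next_q_online, next_q_target, dones)
# First target: 1.0 + 0.99 * 0.7 (target net evaluates the online net's
# greedy action); second target: reward only, since the episode terminated.
```

In the paper's setting these Q-values would come from the networks driving the TORCS agent, and the structured reward system described in the abstract would supply the `rewards` vector.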