Sampled-data control through model-free reinforcement learning with effective experience replay
Reinforcement Learning (RL) based control algorithms can learn control strategies for nonlinear and uncertain environments by interacting with them. Guided by the rewards generated by the environment, an RL agent can learn the control strategy directly in a model-free way instead of identifying a dynamic model of the environment...
Saved in:
| Main Authors: | Bo Xiao, H.K. Lam, Xiaojie Su, Ziwei Wang, Frank P.-W. Lo, Shihong Chen, Eric Yeatman |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | KeAi Communications Co., Ltd., 2023-02-01 |
| Series: | Journal of Automation and Intelligence |
| Subjects: | Reinforcement learning; Neural networks; Sampled-data control; Model-free; Effective experience replay |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2949855423000011 |
| author | Bo Xiao; H.K. Lam; Xiaojie Su; Ziwei Wang; Frank P.-W. Lo; Shihong Chen; Eric Yeatman |
|---|---|
| collection | DOAJ |
| description | Reinforcement Learning (RL) based control algorithms can learn control strategies for nonlinear and uncertain environments by interacting with them. Guided by the rewards generated by the environment, an RL agent can learn the control strategy directly in a model-free way instead of identifying a dynamic model of the environment. In this paper, we propose a sampled-data RL control strategy to reduce the computational demand. In the sampled-data control strategy, the overall control system has a hybrid structure: the plant is continuous in time while the controller (RL agent) operates in discrete time. Since the continuous states of the plant serve as the input of the agent, the state–action value function is approximated by a fully connected feed-forward neural network (FCFFNN). Instead of updating the controller at every step of the interaction with the environment, the learning and acting stages are decoupled so that the control strategy is learned more effectively through experience replay. In the acting stage, the most effective experience obtained during the interaction with the environment is stored; in the learning stage, the stored experience is replayed a customized number of times, which enhances the experience replay process. The effectiveness of the proposed approach is verified by simulation examples. (An illustrative sketch of this acting/learning scheme is given below the record fields.) |
| format | Article |
| id | doaj-art-eefac0a080e0465790b1777a01329b1f |
| institution | OA Journals |
| issn | 2949-8554 |
| language | English |
| publishDate | 2023-02-01 |
| publisher | KeAi Communications Co., Ltd. |
| record_format | Article |
| series | Journal of Automation and Intelligence |
| spelling | Journal of Automation and Intelligence, vol. 2, no. 1, pp. 20–30, 2023-02-01, DOI: 10.1016/j.jai.2023.100018. Author affiliations: Bo Xiao, Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK (corresponding author); H.K. Lam, Department of Engineering, King’s College London, London WC2B 4BG, UK (corresponding author); Xiaojie Su, Department of Automation, Chongqing University, Shapingba District, Chongqing, China; Ziwei Wang, School of Engineering, Lancaster University, LA1 4YW, UK; Frank P.-W. Lo, Hamlyn Centre, Imperial College London, London SW7 2AZ, UK; Shihong Chen, Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK; Eric Yeatman, Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK. |
| title | Sampled-data control through model-free reinforcement learning with effective experience replay |
| topic | Reinforcement learning; Neural networks; Sampled-data control; Model-free; Effective experience replay |
| url | http://www.sciencedirect.com/science/article/pii/S2949855423000011 |
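The abstract describes three concrete mechanisms: a discrete-time RL controller acting on sampled states of a continuous plant, a fully connected feed-forward Q-network, and a decoupled acting/learning cycle in which only the most effective transitions are stored and then replayed a customized number of times. The following is a minimal sketch of how these pieces could fit together, not the authors' implementation: the `env` wrapper (assumed to integrate the continuous plant over one sampling period under a zero-order hold), the `|reward|`-based effectiveness score, and the fixed `replay_times` value are illustrative assumptions.

```python
import random
from collections import deque

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Fully connected feed-forward network approximating the state-action value function."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class EffectiveReplayBuffer:
    """Keeps only transitions whose score passes a threshold (assumed selection rule)."""

    def __init__(self, capacity: int = 10_000, score_threshold: float = 0.0):
        self.buffer = deque(maxlen=capacity)
        self.score_threshold = score_threshold

    def maybe_store(self, transition, score: float) -> None:
        # Store the transition only if it is judged "effective" enough.
        if score > self.score_threshold:
            self.buffer.append(transition)

    def sample(self, batch_size: int):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))


def acting_stage(env, q_net, buffer, episode_len=200, epsilon=0.1):
    """Acting stage: the agent reads the plant state only at sampling instants.

    `env` is a hypothetical sampled-data wrapper of the continuous plant whose
    step() integrates the plant over one sampling period while the discrete
    action is held constant (zero-order hold).
    """
    state = torch.as_tensor(env.reset(), dtype=torch.float32)
    for _ in range(episode_len):
        if random.random() < epsilon:
            action = random.randrange(q_net.net[-1].out_features)
        else:
            with torch.no_grad():
                action = int(q_net(state).argmax().item())
        next_state, reward, done = env.step(action)
        next_state = torch.as_tensor(next_state, dtype=torch.float32)
        transition = (state,
                      torch.tensor(action),
                      torch.tensor(reward, dtype=torch.float32),
                      next_state)
        # Illustrative effectiveness score: transitions with larger |reward| are kept.
        buffer.maybe_store(transition, score=abs(float(reward)))
        state = next_state
        if done:
            break


def learning_stage(q_net, buffer, optimizer, replay_times=5, batch_size=32, gamma=0.99):
    """Learning stage, decoupled from acting: replay stored experience a customized number of times."""
    for _ in range(replay_times):
        batch = buffer.sample(batch_size)
        if not batch:
            return
        states, actions, rewards, next_states = (torch.stack(x) for x in zip(*batch))
        q_sa = q_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = rewards + gamma * q_net(next_states).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In use, the two stages would simply alternate: run `acting_stage` for one or more episodes to refill the buffer with effective transitions, then call `learning_stage` with whatever `replay_times` the designer chooses, which is where the customized replay count mentioned in the abstract comes in.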