LazyAct: Lazy actor with dynamic state skip based on constrained MDP.
Deep reinforcement learning has achieved significant success in complex decision-making tasks. However, the high computational cost of policies based on deep neural networks restricts their practical application. Specifically, each decision made by an agent requires a complete neural network computa...
Saved in:
| Main Authors: | Hongjie Zhang, Zhenyu Chen, Hourui Deng, Chaosheng Feng |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Public Library of Science (PLoS), 2025-01-01 |
| Series: | PLoS ONE |
| Online Access: | https://doi.org/10.1371/journal.pone.0318778 |
Similar Items
- Automatic Refactoring Tool for Handling the Lazy Class Code Smell with a Software Metrics Approach
  by: Umi Sa'adah, et al.
  Published: (2022-08-01)
- Computationally expensive constrained problems via surrogate-assisted dynamic population evolutionary optimization
  by: Zan Yang, et al.
  Published: (2025-01-01)
- Constraining the equation of state in neutron-star cores via the long-ringdown signal
  by: Christian Ecker, et al.
  Published: (2025-02-01)
- Challenging the state. Churches as political actors in South Africa. 1980-1994
  by: JJ Lubbe
  Published: (2000-06-01)
- Non-state actor perceptions of legitimacy and meaningful participation in international climate governance
  by: Lisa Dellmuth, et al.
  Published: (2025-02-01)