DS-HPPO Deep Reinforcement Learning for Optimal DVFS Control on an Image Signal Processor
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11027120/ |
| Summary: | In this paper, we formulate optimal DVFS control as a sequential decision-making problem that aims to maximize the overall energy efficiency of processing a specific image task on an ISP chip. We model this optimization problem as a Parameterized Action Markov Decision Process (PAMDP) and solve the PAMDP with the proposed DS-HPPO deep reinforcement learning algorithm. We design and implement a verification system for the DS-HPPO-based optimal DVFS control policy by leveraging a self-designed ISP chip. Compared with the energy efficiency under the default operating condition of the ISP chip, the proposed DS-HPPO-based optimal DVFS control policy achieves a 33.988% improvement in energy efficiency. Furthermore, when accounting for the impact of manufacturing process variations on performance, power consumption, and chip area, the experimental results clearly demonstrate our work's effectiveness and superiority over state-of-the-art works. |
|---|---|
| ISSN: | 2169-3536 |
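The summary describes modeling DVFS control as a PAMDP, where each action pairs a discrete choice (e.g. a frequency level) with a continuous parameter (e.g. a voltage setting). The sketch below illustrates that action structure and a toy energy-efficiency reward; the frequency levels, voltage bounds, and the frames-per-joule proxy are all hypothetical placeholders, not values from the paper or the actual ISP chip.

```python
import random

# Assumed discrete DVFS frequency levels (MHz) and continuous voltage bounds (V).
# These numbers are illustrative only.
FREQ_LEVELS_MHZ = [200, 400, 600, 800]
V_MIN, V_MAX = 0.7, 1.1

def sample_action(rng: random.Random):
    """Sample a parameterized action: (discrete level index, continuous voltage)."""
    k = rng.randrange(len(FREQ_LEVELS_MHZ))
    v = rng.uniform(V_MIN, V_MAX)
    return k, v

def energy_efficiency(k: int, v: float) -> float:
    """Toy frames-per-joule proxy: throughput scales with frequency,
    dynamic power scales roughly with f * V^2 (a common CMOS approximation)."""
    f = FREQ_LEVELS_MHZ[k]
    throughput = f             # frames/s proxy
    power = f * v * v          # power proxy
    return throughput / power  # higher is better

# Random search over the parameterized action space stands in for the
# DS-HPPO policy, which the paper trains to make this choice per state.
rng = random.Random(0)
best = max((sample_action(rng) for _ in range(1000)),
           key=lambda a: energy_efficiency(*a))
```

Under this toy reward the search favors low voltages, which mirrors the intuition that the learned policy trades frequency and voltage jointly rather than picking from a fixed table of operating points.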