Erratum to “Learning to Boost the Performance of Stable Nonlinear Systems”
This erratum addresses errors in [1]. Due to a production error, Figs. 4, 5, 6, 8, and 9 did not render correctly in the article PDF. The correct figures are as follows.

Figure 4. Mountains—Closed-loop trajectories before training (left) and after training (middle and right) over 100 randomly sampled initial conditions marked with $\circ$. Snapshots taken at time instants $\tau_i$. Colored (gray) lines show the trajectories in $[0, \tau_i]$ ($[\tau_i, \infty)$). Colored balls (and their radii) represent the agents (and their sizes for collision avoidance).

Figure 5. Mountains—Closed-loop trajectories after 25%, 50%, and 75% of the total training; the fully trained closed-loop trajectories are shown in Fig. 4. Even though the performance can be further optimized, stability is always guaranteed.

Figure 6. Mountains—Closed-loop trajectories after training. (Left and middle) Controller tested on a system with mass uncertainty (-10% and +10%, respectively). (Right) Controller trained with safety promotion through (45). Training initial conditions marked with $\circ$. Snapshots taken at time instants $\tau_i$. Colored (gray) lines show the trajectories in $[0, \tau_i]$ ($[\tau_i, \infty)$). Colored balls (and their radii) represent the agents (and their sizes for collision avoidance).

Figure 8. Mountains—Closed-loop trajectories when using the online policy given by (48). Snapshots of three trajectories starting from different test initial conditions.

Figure 9. Mountains—Three different closed-loop trajectories after training a REN controller without $\mathcal{L}_2$ stability guarantees, over 100 randomly sampled initial conditions marked with $\circ$. Colored (gray) lines show the trajectories in (after) the training time interval.
Main Authors: Luca Furieri, Clara Lucia Galimberti, Giancarlo Ferrari-Trecate
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Open Journal of Control Systems
Online Access: https://ieeexplore.ieee.org/document/10870044/
author | Luca Furieri; Clara Lucia Galimberti; Giancarlo Ferrari-Trecate
collection | DOAJ |
format | Article |
id | doaj-art-aebd7863377c4853980d0c37fd13cb3d |
institution | Kabale University |
issn | 2694-085X |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Open Journal of Control Systems |
doi | 10.1109/OJCSYS.2025.3529361
volume | 4
pages | 53-53
author_orcid | Luca Furieri: 0000-0001-6103-4480; Clara Lucia Galimberti: 0000-0003-0700-6811; Giancarlo Ferrari-Trecate: 0000-0002-9492-9624
affiliation | École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland (all three authors)
title | Erratum to “Learning to Boost the Performance of Stable Nonlinear Systems” |
url | https://ieeexplore.ieee.org/document/10870044/ |