Fast Context Adaptation in Cost-Aware Continual Learning

In the past few years, Deep Reinforcement Learning (DRL) has become a valuable solution to automatically learn efficient resource management strategies in complex networks with time-varying statistics. However, the increased complexity of 5G and Beyond networks requires correspondingly more complex learning agents, and the learning process itself may end up competing with users for communication and computational resources. This creates friction: on the one hand, the learning process needs resources to quickly converge to an effective strategy; on the other hand, it must be efficient, i.e., take as few resources as possible from the users' data plane, so as not to throttle their Quality of Service (QoS). In this paper, we investigate this trade-off, which we refer to as the cost of learning, and propose a dynamic strategy to balance the resources assigned to the data plane and those reserved for learning. With the proposed approach, a learning agent can quickly converge to an efficient resource allocation strategy and adapt to changes in the environment, in line with the Continual Learning (CL) paradigm, while minimizing the impact on the users' QoS. Simulation results show that the proposed method outperforms static allocation methods with minimal learning overhead, almost reaching the performance of an ideal out-of-band CL solution.
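The trade-off described in the abstract can be made concrete with a toy model. The sketch below is not the paper's algorithm: it assumes a single link of fixed capacity, a hypothetical learning_share() rule driven by a smoothed TD error, and hand-picked constants (max_share, sensitivity), purely to illustrate how in-band learning traffic could shrink as a DRL agent converges, handing capacity back to the data plane.

```python
# Hypothetical illustration of the "cost of learning" trade-off: a node with
# fixed capacity splits resources between the users' data plane and the
# in-band traffic of a learning agent. The annealing rule and all constants
# below are assumptions for illustration, not the authors' method.

def learning_share(td_error_ema: float, max_share: float = 0.3,
                   sensitivity: float = 5.0) -> float:
    """Fraction of capacity reserved for learning traffic.

    The share grows with the (smoothed) TD error: far from convergence,
    learning is prioritized; as the error shrinks, resources are
    progressively returned to the data plane.
    """
    return max_share * (1.0 - 1.0 / (1.0 + sensitivity * td_error_ema))

def allocate(capacity_mbps: float, td_error_ema: float) -> tuple[float, float]:
    """Split total capacity between data plane and learning plane."""
    rho = learning_share(td_error_ema)
    return (1.0 - rho) * capacity_mbps, rho * capacity_mbps

if __name__ == "__main__":
    # Decreasing TD error mimics a training run approaching convergence.
    for err in (1.0, 0.3, 0.05):
        data, learn = allocate(100.0, err)
        print(f"TD error {err:.2f}: data plane {data:.1f} Mb/s, "
              f"learning {learn:.1f} Mb/s")
```

Running the demo, the learning share falls from 25% to 6% of capacity as the TD error decays, mirroring the dynamic balance between learning speed and users' QoS that the abstract describes.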

Bibliographic Details
Main Authors: Seyyidahmed Lahmer, Federico Mason, Federico Chiariotti, Andrea Zanella (all: Department of Information Engineering, University of Padua, Padua, Italy)
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 479-494
ISSN: 2831-316X
DOI: 10.1109/TMLCN.2024.3386647
ORCID: Federico Mason (0000-0001-5681-1695); Federico Chiariotti (0000-0002-7915-7275); Andrea Zanella (0000-0003-3671-5190)
Collection: DOAJ
Subjects: Resource allocation; reinforcement learning; cost of learning; continual learning; meta-learning; mobile edge computing
Online Access:https://ieeexplore.ieee.org/document/10495063/