Robust Network Slicing: Multi-Agent Policies, Adversarial Attacks, and Defensive Strategies

Bibliographic Details
Main Authors: Feng Wang, M. Cenk Gursoy, Senem Velipasalar
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Transactions on Machine Learning in Communications and Networking
Subjects: Network slicing; dynamic channel access; deep reinforcement learning; multi-agent actor-critic; adversarial learning; policy ensemble
Online Access: https://ieeexplore.ieee.org/document/10322663/
author Feng Wang
M. Cenk Gursoy
Senem Velipasalar
collection DOAJ
description In this paper, we present a multi-agent deep reinforcement learning (deep RL) framework for network slicing in a dynamic environment with multiple base stations and multiple users. In particular, we propose a novel deep RL framework with multiple actors and a centralized critic (MACC), in which the actors are implemented as pointer networks to accommodate the varying input dimension. We evaluate the performance of the proposed deep RL algorithm via simulations to demonstrate its effectiveness. Subsequently, we develop a deep-RL-based jammer with limited prior information and a limited power budget. The goal of the jammer is to minimize the transmission rates achieved with network slicing and thus degrade the network slicing agents’ performance. We design a jammer with both listening and jamming phases, and address jamming location optimization as well as jamming channel optimization via deep RL. We evaluate the jammer at the optimized location, generating interference attacks on the optimized set of channels by switching between the jamming and listening phases. We show that the proposed jammer can significantly reduce the victims’ performance without direct feedback or prior knowledge of the network slicing policies. Finally, we devise a Nash-equilibrium-supervised policy ensemble mixed strategy profile for network slicing (as a defensive measure) and for jamming. We evaluate the proposed policy ensemble algorithm by applying it to the network slicing agents and the jammer agent in simulations to show its effectiveness.
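The description covers three components: a multi-actor, centralized-critic (MACC) learner with pointer-network actors, a listen-then-jam adversary, and a Nash-equilibrium-supervised policy ensemble. The toy sketches below illustrate each idea; they are not the authors' code, and every class name, dimension, and numeric value is an assumption made for illustration.

First, a minimal PyTorch sketch of the actor/critic split: each base-station actor is a pointer-network-style attention module that can score a varying number of users, while a single centralized critic evaluates a fixed-size encoding of the joint state.

```python
import torch
import torch.nn as nn

class PointerActor(nn.Module):
    """Scores a variable-length set of per-user features and returns a
    probability distribution over those users (pointer-network style)."""
    def __init__(self, feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden_dim)      # per-user embedding
        self.query = nn.Parameter(torch.randn(hidden_dim))  # learned query vector
        self.scale = hidden_dim ** 0.5

    def forward(self, user_feats: torch.Tensor) -> torch.Tensor:
        # user_feats: (num_users, feat_dim); num_users may change every step.
        keys = torch.tanh(self.encoder(user_feats))   # (num_users, hidden_dim)
        scores = keys @ self.query / self.scale       # (num_users,)
        return torch.softmax(scores, dim=-1)          # attention weights = policy

class CentralizedCritic(nn.Module):
    """Maps a fixed-size encoding of the joint state/action of all agents
    to a scalar value estimate."""
    def __init__(self, joint_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, joint_state_action: torch.Tensor) -> torch.Tensor:
        return self.net(joint_state_action)

# Three base-station actors share one critic; the user count differs per step.
actors = [PointerActor(feat_dim=5) for _ in range(3)]
critic = CentralizedCritic(joint_dim=3 * 16)   # assumed joint encoding size
probs = actors[0](torch.randn(7, 5))           # 7 users observed at this step
print(probs.shape, float(probs.sum()))         # length-7 distribution summing to 1
```

Second, a bandit-style stand-in for the deep RL jammer: the agent alternates between a listening phase, in which it updates a per-channel activity estimate, and a jamming phase, in which it spends a limited budget on the channels it currently believes are busiest. The channel count, budget, and epsilon-greedy rule are illustrative choices, not the paper's design.

```python
import numpy as np

class ListenJamAgent:
    def __init__(self, num_channels: int, jam_budget: int, epsilon: float = 0.1):
        self.activity = np.zeros(num_channels)  # running estimate of channel usage
        self.jam_budget = jam_budget            # max channels jammed per slot
        self.epsilon = epsilon
        self.rng = np.random.default_rng(0)

    def listen(self, sensed_power: np.ndarray) -> None:
        # Exponential moving average of the sensed per-channel power.
        self.activity = 0.9 * self.activity + 0.1 * sensed_power

    def jam(self) -> np.ndarray:
        # Mostly attack the busiest channels; occasionally explore at random.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(len(self.activity), size=self.jam_budget, replace=False)
        return np.argsort(self.activity)[-self.jam_budget:]

agent = ListenJamAgent(num_channels=8, jam_budget=2)
for t in range(10):
    if t % 2 == 0:                          # listening slot
        agent.listen(np.abs(np.random.randn(8)))
    else:                                   # jamming slot
        print("jamming channels:", agent.jam())
```

Third, one way to realize a mixed strategy over policy ensembles: treat the slicing ensemble (rows) and the jammer ensemble (columns) as a zero-sum matrix game whose entries could be, for example, simulated transmission rates, and compute the row player's maximin (Nash) mixture by linear programming. The payoff values below are fabricated purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def maximin_mixture(payoff: np.ndarray) -> np.ndarray:
    """Row player's maximin mixed strategy for a zero-sum matrix game
    (row player maximizes the expected payoff)."""
    m, n = payoff.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # minimize -v  <=>  maximize v
    # For every column (jammer policy) j:  v - sum_i x_i * payoff[i, j] <= 0
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m]

# Hypothetical 3x3 table of slicing performance under each (defender, jammer) pair.
payoff = np.array([[2.0, 0.5, 1.0],
                   [1.0, 1.5, 0.8],
                   [0.7, 1.2, 1.6]])
mix = maximin_mixture(payoff)
print("sampling probabilities over the slicing policy ensemble:", mix.round(3))
```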
format Article
id doaj-art-d29ea95130ae4995a3fcfd7e9d8a58ad
institution DOAJ
issn 2831-316X
language English
publishDate 2024-01-01
publisher IEEE
record_format Article
series IEEE Transactions on Machine Learning in Communications and Networking
doi 10.1109/TMLCN.2023.3334236
volume 2
pages 49-63
orcid Feng Wang: https://orcid.org/0000-0001-8071-9995
orcid M. Cenk Gursoy: https://orcid.org/0000-0002-7352-1013
orcid Senem Velipasalar: https://orcid.org/0000-0002-1430-1555
affiliation Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, NY, USA (all three authors)
title Robust Network Slicing: Multi-Agent Policies, Adversarial Attacks, and Defensive Strategies
topic Network slicing
dynamic channel access
deep reinforcement learning
multi-agent actor-critic
adversarial learning
policy ensemble
url https://ieeexplore.ieee.org/document/10322663/