Reinforcement Learning and Multi-Access Edge Computing for 6G-Based Underwater Wireless Networks

6G networks are envisioned to dramatically enhance the connectivity landscape by integrating communication across ground, air, and sea environments. In the aquatic domain, the Internet of Underwater Things (IoUT) represents a global network of intelligent underwater devices designed to capture, interpret, and share data.


Bibliographic Details
Main Authors: Juan Carlos Cepeda-Pacheco, Mari Carmen Domingo
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10947688/
author Juan Carlos Cepeda-Pacheco
Mari Carmen Domingo
author_facet Juan Carlos Cepeda-Pacheco
Mari Carmen Domingo
author_sort Juan Carlos Cepeda-Pacheco
collection DOAJ
description 6G networks are envisioned to dramatically enhance the connectivity landscape by integrating communication across ground, air, and sea environments. In the aquatic domain, the Internet of Underwater Things (IoUT) represents a global network of intelligent underwater devices designed to capture, interpret, and share data. Although Underwater Acoustic Communications (UAC) has become widespread as a solution for transmitting information, data collection from Underwater Sensor Nodes (USNs) to the surface results in extensive delays and higher energy consumption. Edge communication emerges as a solution to address these challenges. In this approach, Autonomous Underwater Vehicles (AUVs) bring edge computing as close as possible to the source devices. This paper proposes an innovative AUV-based Multi-Access Edge Computing (MEC) system where cluster-heads that collect data from IoUT devices offload their associated computational tasks to local AUVs. These AUVs are strategically positioned to execute tasks either fully locally, partially, or by offloading them entirely to a more resource-equipped AUV (AUV MEC). We achieve this by jointly optimizing the task offloading strategy, resource allocation, and the trajectories of the AUVs. We formulate a non-convex optimization problem to minimize the weighted sum of service delays for all local AUVs and their energy consumption. To address the NP-hard nature of this problem, we employ a deep reinforcement learning algorithm, Deep Deterministic Policy Gradient (DDPG), to solve it. Extensive simulations have been conducted to evaluate the effectiveness of our proposed communication system. The results show that our proposed algorithm outperforms the Total Offloading (Offloading), Local Execution (Locally), and Actor-Critic (AC) algorithms.
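The objective described above — minimizing a weighted sum of service delay and energy consumption over candidate offloading decisions — can be sketched numerically. The cost model below is an illustrative assumption (the function `service_cost`, the effective-capacitance constant `kappa`, and all numeric values are hypothetical), not the paper's formulation, which additionally optimizes resource allocation and AUV trajectories with DDPG rather than a grid search:

```python
from dataclasses import dataclass

@dataclass
class Task:
    bits: float            # task size in bits
    cycles_per_bit: float  # CPU cycles needed per bit

def service_cost(task, offload_ratio, f_local, f_mec, rate,
                 kappa=1e-27, p_tx=2.0, w_delay=0.5, w_energy=0.5):
    """Weighted delay+energy cost when a fraction `offload_ratio` of a task
    runs on a remote AUV MEC and the rest on the local AUV (toy model)."""
    local_bits = task.bits * (1 - offload_ratio)
    off_bits = task.bits * offload_ratio
    # Local computation: delay = cycles / frequency, energy = kappa * f^2 * cycles
    local_cycles = local_bits * task.cycles_per_bit
    t_local = local_cycles / f_local
    e_local = kappa * f_local**2 * local_cycles
    # Offloaded part: acoustic transmission delay plus remote computation;
    # local and remote legs are assumed to run in parallel
    t_tx = off_bits / rate
    t_mec = off_bits * task.cycles_per_bit / f_mec
    e_tx = p_tx * t_tx
    delay = max(t_local, t_tx + t_mec)
    energy = e_local + e_tx
    return w_delay * delay + w_energy * energy

# Pick the best offloading ratio by grid search over candidate decisions
task = Task(bits=1e6, cycles_per_bit=500)
candidates = [i / 10 for i in range(11)]  # 0.0 (fully local) .. 1.0 (full offload)
best = min(candidates,
           key=lambda r: service_cost(task, r, f_local=5e8, f_mec=5e9, rate=1e4))
```

With an acoustic data rate this low, the grid search settles on fully local execution; raising `rate` or `f_mec` shifts the optimum toward offloading. Navigating that trade-off continuously, per task and per AUV position, is what the DDPG agent in the paper learns.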
format Article
id doaj-art-27f1302210464f42bcd04da1cf4e75e5
institution OA Journals
issn 2169-3536
language English
publishDate 2025-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj-art-27f1302210464f42bcd04da1cf4e75e5 2025-08-20T02:09:32Z
language: eng; publisher: IEEE; series: IEEE Access; issn: 2169-3536
published: 2025-01-01; volume: 13; pages: 60627-60642
doi: 10.1109/ACCESS.2025.3557158; ieee document: 10947688
title: Reinforcement Learning and Multi-Access Edge Computing for 6G-Based Underwater Wireless Networks
authors: Juan Carlos Cepeda-Pacheco (https://orcid.org/0000-0001-9932-9428); Mari Carmen Domingo (https://orcid.org/0000-0002-6901-3817)
affiliation: Network Engineering Department, Universitat Politècnica de Catalunya-Barcelona Tech (UPC), Castelldefels, Spain
url: https://ieeexplore.ieee.org/document/10947688/
keywords: Deep deterministic policy gradient (DDPG); multi-access edge computing; Internet of Underwater Things (IoUT); autonomous underwater vehicles (AUVs); trajectory optimization
spellingShingle Juan Carlos Cepeda-Pacheco
Mari Carmen Domingo
Reinforcement Learning and Multi-Access Edge Computing for 6G-Based Underwater Wireless Networks
IEEE Access
Deep deterministic policy gradient (DDPG)
multi-access edge computing
Internet of Underwater Things (IoUT)
autonomous underwater vehicles (AUVs)
trajectory optimization
title Reinforcement Learning and Multi-Access Edge Computing for 6G-Based Underwater Wireless Networks
title_full Reinforcement Learning and Multi-Access Edge Computing for 6G-Based Underwater Wireless Networks
title_fullStr Reinforcement Learning and Multi-Access Edge Computing for 6G-Based Underwater Wireless Networks
title_full_unstemmed Reinforcement Learning and Multi-Access Edge Computing for 6G-Based Underwater Wireless Networks
title_short Reinforcement Learning and Multi-Access Edge Computing for 6G-Based Underwater Wireless Networks
title_sort reinforcement learning and multi access edge computing for 6g based underwater wireless networks
topic Deep deterministic policy gradient (DDPG)
multi-access edge computing
Internet of Underwater Things (IoUT)
autonomous underwater vehicles (AUVs)
trajectory optimization
url https://ieeexplore.ieee.org/document/10947688/
work_keys_str_mv AT juancarloscepedapacheco reinforcementlearningandmultiaccessedgecomputingfor6gbasedunderwaterwirelessnetworks
AT maricarmendomingo reinforcementlearningandmultiaccessedgecomputingfor6gbasedunderwaterwirelessnetworks