D2D computation task offloading for efficient federated learning


Bibliographic Details
Main Authors: Xiaoran CAI, Xiaopeng MO, Jie XU
Format: Article
Language: zho
Published: China InfoCom Media Group 2019-12-01
Series: 物联网学报
Subjects: federated learning, mobile edge computing, task offloading, device-to-device communication
Online Access: http://www.wlwxb.com.cn/zh/article/doi/10.11959/j.issn.2096-3750.2019.00135/
author Xiaoran CAI
Xiaopeng MO
Jie XU
collection DOAJ
description Federated learning is a distributed machine learning technique. Communication and computation resource constraints at edge nodes are becoming the performance bottleneck. In particular, when different edge nodes have distinct computation and communication capabilities, model training performance may degrade severely, necessitating joint communication and computation optimization. To tackle this challenge, a computation task offloading scheme enabled by device-to-device (D2D) communication was proposed, in which edge nodes exchange data samples over D2D links to balance processing capability against task load, so as to minimize the total time delay of machine learning model training. Simulation results show that, compared with a benchmark scheme without D2D task offloading, the proposed scheme significantly improves the training speed and efficiency of federated learning.
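The scheme described above balances task load across edge nodes with unequal processing power so that no single slow node dominates the per-round training delay. As a rough illustration only, and not the paper's actual optimization (whose exact formulation is not reproduced in this record), the Python sketch below reallocates data samples in proportion to each node's computation rate; the variable names (samples, comp_rate) and the simplification of ignoring D2D transfer time are assumptions.

# Illustrative sketch (assumed names, not the paper's formulation): the per-round
# training delay is set by the slowest edge node, so D2D offloading that equalizes
# local computation times approximately minimizes the round delay.

def balance_samples(samples, comp_rate):
    """Reallocate samples so each node finishes its local update at roughly the
    same time, i.e. an allocation proportional to each node's computation rate."""
    total_samples = sum(samples)
    total_rate = sum(comp_rate)
    target = [total_samples * r / total_rate for r in comp_rate]
    # positive transfer -> node receives samples over D2D, negative -> node sends
    transfers = [t - n for t, n in zip(target, samples)]
    return target, transfers

def round_delay(samples, comp_rate):
    """Per-round computation delay = the slowest node's time (D2D transfer time
    is ignored here for simplicity)."""
    return max(n / r for n, r in zip(samples, comp_rate))

samples = [1000, 1000, 1000]      # initial local dataset sizes (assumed)
comp_rate = [50.0, 200.0, 400.0]  # samples processed per second (assumed)

print(round_delay(samples, comp_rate))            # 20.0 s without offloading
target, transfers = balance_samples(samples, comp_rate)
print(round(round_delay(target, comp_rate), 1))   # about 4.6 s with D2D balancing

Under this proportional allocation every node's local computation time is equal, which makes the maximum over nodes, and hence the per-round delay, as small as possible for a fixed total workload.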
format Article
id doaj-art-89f974e2118442e7a834a99f1c81e290
institution Kabale University
issn 2096-3750
language zho
publishDate 2019-12-01
publisher China InfoCom Media Group
record_format Article
series 物联网学报
title D2D computation task offloading for efficient federated learning
topic federated learning
mobile edge computing
task offloading
device-to-device communication
url http://www.wlwxb.com.cn/zh/article/doi/10.11959/j.issn.2096-3750.2019.00135/