Federated Reinforcement Learning in Stock Trading Execution: The FPPO Algorithm for Information Security

Stock trading execution is a critical component in the complex financial market landscape, and the development of a robust trade execution framework is essential for financial institutions pursuing profitability. This paper presents the Federated Proximal Policy Optimization (FPPO) algorithm, an adaptive trade execution framework that leverages federated reinforcement learning.

Bibliographic Details
Main Authors: Haogang Feng, Yue Wang, Shida Zhong, Tao Yuan, Zhi Quan
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10872909/
author Haogang Feng
Yue Wang
Shida Zhong
Tao Yuan
Zhi Quan
collection DOAJ
description Stock trading execution is a critical component in the complex financial market landscape, and the development of a robust trade execution framework is essential for financial institutions pursuing profitability. This paper presents the Federated Proximal Policy Optimization (FPPO) algorithm, an adaptive trade execution framework that leverages federated reinforcement learning. The FPPO algorithm demonstrates significant improvements in model performance across various stocks, with average returns enhanced by 3% to 15%. It also exhibits superior performance in key metrics such as the reward function value, showcasing its effectiveness in different financial contexts. The paper further explores the model’s performance under the FPPO algorithm with varying numbers of client nodes and different risk preferences, underscoring the importance of these factors in model construction. The results substantiate the FPPO algorithm’s capability to safeguard privacy, ensure high performance, and enable the creation of personalized trading models in the optimal trade execution problem. This positions investors to gain a competitive edge in the dynamic and complex financial markets. Although the FPPO algorithm demonstrates significant potential in trade execution optimization, it may need to integrate a broader range of real-world variables and develop advanced privacy-preserving mechanisms to enhance its applicability in diverse financial contexts.
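Note: the abstract describes federated training of a PPO-based execution policy across client nodes that keep their trading data private. As a rough illustration only, and not the authors' implementation, the Python sketch below shows the generic pattern such a setup could follow: each hypothetical client runs a local update on private data, only model parameters are shared, and the server aggregates them by weighted averaging (FedAvg-style). The toy gradient, data shapes, and all function names are assumptions made for illustration; the PPO clipped surrogate shown is the standard formulation, not a claim about FPPO's exact objective.

import numpy as np

rng = np.random.default_rng(0)


def ppo_clip_objective(ratio, advantage, eps=0.2):
    # Standard PPO clipped surrogate objective (element-wise).
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)


def local_update(params, private_data, lr=0.01, steps=5):
    # Placeholder for a client's local training on private trading data.
    # The "gradient" here is a toy assumption; real clients would run PPO
    # updates against their own market/order-book data, which never leaves
    # the client node.
    params = params.copy()
    for _ in range(steps):
        grad = private_data.mean(axis=0) - params
        params += lr * grad
    return params


def fed_avg(client_params, client_weights=None):
    # Server-side aggregation: weighted average of client parameter vectors.
    stacked = np.stack(client_params)
    if client_weights is None:
        client_weights = np.ones(len(client_params)) / len(client_params)
    return np.average(stacked, axis=0, weights=client_weights)


# Simulate a few communication rounds with hypothetical client nodes.
n_clients, dim = 4, 8
global_params = np.zeros(dim)
client_data = [rng.normal(size=(32, dim)) for _ in range(n_clients)]

for round_idx in range(3):
    local_params = [local_update(global_params, data) for data in client_data]
    global_params = fed_avg(local_params)
    print(f"round {round_idx}: global param norm = {np.linalg.norm(global_params):.3f}")

# Toy evaluation of the PPO clipped surrogate, just to show the PPO side of FPPO.
print("PPO clipped surrogate (toy values):",
      ppo_clip_objective(np.array([1.3]), np.array([0.5])))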
format Article
id doaj-art-6cbdfb4cd27d41b0a1112b5599a97714
institution Kabale University
issn 2169-3536
language English
publishDate 2025-01-01
publisher IEEE
record_format Article
series IEEE Access
doi 10.1109/ACCESS.2025.3538859
volume 13
pages 25074-25086
ieee_document 10872909
author_affiliation Haogang Feng (ORCID: 0000-0001-9714-9060), State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen, China
author_affiliation Yue Wang (ORCID: 0000-0002-8526-1300), College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China
author_affiliation Shida Zhong (ORCID: 0000-0003-2330-8166), State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen, China
author_affiliation Tao Yuan (ORCID: 0000-0002-9525-3814), State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen, China
author_affiliation Zhi Quan (ORCID: 0000-0001-8108-2893), State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen, China
title Federated Reinforcement Learning in Stock Trading Execution: The FPPO Algorithm for Information Security
topic Federated reinforcement learning
optimal execution
privacy protection
url https://ieeexplore.ieee.org/document/10872909/