Optimizing the Pairs-Trading Strategy Using Deep Reinforcement Learning with Trading and Stop-Loss Boundaries
Many researchers have tried to optimize pairs trading as the number of opportunities for arbitrage profit has gradually decreased. Pairs trading is a market-neutral strategy; it profits if the given condition is satisfied within a given trading window, and if not, there is a risk of loss. In this study, we propose an optimized pairs-trading strategy using deep reinforcement learning, specifically a deep Q-network (DQN), with various trading and stop-loss boundaries. More specifically, if spreads hit trading thresholds and revert to the mean, the agent receives a positive reward. However, if spreads hit stop-loss thresholds or fail to revert to the mean after hitting the trading thresholds, the agent receives a negative reward. The agent is trained to select the optimum level of discretized trading and stop-loss boundaries given a spread so as to maximize the expected sum of discounted future profits. Pairs are selected from stocks on the S&P 500 Index using a cointegration test. We compared our proposed method with traditional pairs-trading strategies that use constant trading and stop-loss boundaries. We find that our proposed model is trained well and outperforms traditional pairs-trading strategies.
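The pair-selection step described above screens S&P 500 stocks with a cointegration test. The sketch below is a minimal, hypothetical illustration of such a screen using the Engle-Granger test from statsmodels; the toy tickers, synthetic prices, and 0.05 p-value cutoff are assumptions for illustration, not the authors' actual universe, data, or thresholds.

```python
# Hypothetical pair-screening sketch: keep ticker pairs whose price series
# pass an Engle-Granger cointegration test (statsmodels.tsa.stattools.coint).
from itertools import combinations

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import coint


def select_cointegrated_pairs(prices: pd.DataFrame, p_threshold: float = 0.05):
    """Return (ticker_a, ticker_b, p_value) for pairs that pass the test."""
    pairs = []
    for a, b in combinations(prices.columns, 2):
        # coint() runs an Engle-Granger test and returns
        # (t-statistic, p-value, critical values).
        _, p_value, _ = coint(prices[a], prices[b])
        if p_value < p_threshold:
            pairs.append((a, b, p_value))
    # Rank candidates by strength of the cointegration evidence.
    return sorted(pairs, key=lambda item: item[2])


if __name__ == "__main__":
    # Toy data: two series driven by a shared random walk (cointegrated)
    # and one independent random walk, standing in for real S&P 500 prices.
    rng = np.random.default_rng(0)
    n = 500
    common = np.cumsum(rng.normal(size=n))
    prices = pd.DataFrame({
        "AAA": 100 + common + rng.normal(scale=0.5, size=n),
        "BBB": 50 + 0.5 * common + rng.normal(scale=0.5, size=n),
        "CCC": 100 + np.cumsum(rng.normal(size=n)),
    })
    print(select_cointegrated_pairs(prices))
```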
| Main Authors: | Taewook Kim; Ha Young Kim |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2019-01-01 |
| Series: | Complexity |
| Online Access: | http://dx.doi.org/10.1155/2019/3582516 |
| Field | Value |
|---|---|
| _version_ | 1849307357923770368 |
| author | Taewook Kim; Ha Young Kim |
| author_facet | Taewook Kim; Ha Young Kim |
| author_sort | Taewook Kim |
| collection | DOAJ |
| description | Many researchers have tried to optimize pairs trading as the number of opportunities for arbitrage profit has gradually decreased. Pairs trading is a market-neutral strategy; it profits if the given condition is satisfied within a given trading window, and if not, there is a risk of loss. In this study, we propose an optimized pairs-trading strategy using deep reinforcement learning, specifically a deep Q-network (DQN), with various trading and stop-loss boundaries. More specifically, if spreads hit trading thresholds and revert to the mean, the agent receives a positive reward. However, if spreads hit stop-loss thresholds or fail to revert to the mean after hitting the trading thresholds, the agent receives a negative reward. The agent is trained to select the optimum level of discretized trading and stop-loss boundaries given a spread so as to maximize the expected sum of discounted future profits. Pairs are selected from stocks on the S&P 500 Index using a cointegration test. We compared our proposed method with traditional pairs-trading strategies that use constant trading and stop-loss boundaries. We find that our proposed model is trained well and outperforms traditional pairs-trading strategies. (A minimal sketch of this reward rule follows the table.) |
| format | Article |
| id | doaj-art-28e8fd7a3e524f8a874817d2c7cf883d |
| institution | Kabale University |
| issn | 1076-2787; 1099-0526 |
| language | English |
| publishDate | 2019-01-01 |
| publisher | Wiley |
| record_format | Article |
| series | Complexity |
| author affiliations | Taewook Kim: Qraft Technologies, Inc., Ttukseom-ro 1-gil, Sungdong-gu, Seoul 04778, Republic of Korea; Ha Young Kim: Graduate School of Information, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea |
| title | Optimizing the Pairs-Trading Strategy Using Deep Reinforcement Learning with Trading and Stop-Loss Boundaries |
| url | http://dx.doi.org/10.1155/2019/3582516 |
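A minimal, hypothetical sketch of the reward rule outlined in the description field above: for one trading window, the agent picks one of the discretized (trading, stop-loss) boundary pairs; the episode earns a positive reward if the normalized spread crosses the trading boundary and then reverts to its mean before the window ends, and a negative reward if it hits the stop-loss boundary or never reverts. The boundary grid, reward magnitudes, and z-score normalization below are illustrative assumptions, not the authors' exact settings.

```python
# Hypothetical reward rule for one trading window, given a path of
# normalized spread values (z-scores) and a chosen boundary pair.
from typing import Sequence

# Example discretized action space: (trading boundary, stop-loss boundary).
BOUNDARY_ACTIONS = [(1.0, 2.0), (1.5, 2.5), (2.0, 3.0)]


def episode_reward(zscores: Sequence[float], trade_bound: float,
                   stop_bound: float, win: float = 1.0, loss: float = -1.0,
                   no_trade: float = 0.0) -> float:
    """Simulate one trading window and return its reward."""
    position = 0  # +1 = long the spread, -1 = short the spread, 0 = flat
    for z in zscores:
        if position == 0:
            # Open a position when the spread crosses the trading boundary.
            if z >= trade_bound:
                position = -1   # spread is high: short it, bet on reversion
            elif z <= -trade_bound:
                position = +1   # spread is low: buy it, bet on reversion
        else:
            # Stop-loss: the spread moved further away instead of reverting.
            if abs(z) >= stop_bound:
                return loss
            # Mean reversion: the spread came back to (or through) its mean.
            if (position == -1 and z <= 0.0) or (position == +1 and z >= 0.0):
                return win
    # Window closed: either no trade was opened or the spread never reverted.
    return no_trade if position == 0 else loss


if __name__ == "__main__":
    path = [0.2, 0.8, 1.6, 2.1, 1.2, 0.3, -0.1]
    for tb, sb in BOUNDARY_ACTIONS:
        print((tb, sb), episode_reward(path, tb, sb))
```

In a full DQN setup along the lines the abstract describes, this per-window reward would serve as the training signal for choosing among the discretized boundary actions given the current spread.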