AlphaRouter: Bridging the Gap Between Reinforcement Learning and Optimization for Vehicle Routing with Monte Carlo Tree Searches

Bibliographic Details
Main Authors: Won-Jun Kim, Junho Jeong, Taeyeong Kim, Kichun Lee
Format: Article
Language: English
Published: MDPI AG, 2025-02-01
Series: Entropy
Subjects: deep reinforcement learning; reinforcement learning; MCTS; vehicle routing problem
Online Access: https://www.mdpi.com/1099-4300/27/3/251
ISSN: 1099-4300
DOI: 10.3390/e27030251
Author Affiliations:
Won-Jun Kim: Hyundai Glovis, Seoul 685-700, Republic of Korea
Junho Jeong, Taeyeong Kim, Kichun Lee: Department of Industrial Engineering, College of Engineering, Hanyang University, Seoul 133-791, Republic of Korea

Description:
Deep reinforcement learning (DRL) as a routing-problem solver has shown promising results in recent studies. However, an inherent gap exists between computationally driven DRL and optimization-based heuristics: while a DRL model trained on a problem class can solve many similar problem instances, traditional optimization algorithms focus on optimizing the solution to one specific instance. In this paper, we propose AlphaRouter, an approach that solves routing problems while bridging this gap between reinforcement learning and optimization. Tailored to routing problems, our approach first proposes attention-enabled policy and value networks: a policy network that produces a probability distribution over all candidate nodes, and a value network that estimates the expected distance from any given state. We then adapt Monte Carlo tree search (MCTS) to routing problems, selectively combining it with these networks. Our experiments demonstrate that the combined approach is promising, yielding better solutions than reinforcement-learning (RL) approaches without MCTS and performance comparable to classical heuristics.
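The described method combines learned policy/value estimates with Monte Carlo tree search for node selection. As a rough illustration only (not the paper's implementation), the sketch below runs an AlphaZero-style PUCT search on a toy TSP instance, substituting a uniform prior for the attention policy network and a greedy nearest-neighbour completion cost for the value network; all names (`COORDS`, `policy_prior`, `value_estimate`) are hypothetical.

```python
import math

# Toy TSP instance: coordinates of 6 nodes; node 0 is the depot/start.
COORDS = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3), (3, 1)]

def dist(a, b):
    (x1, y1), (x2, y2) = COORDS[a], COORDS[b]
    return math.hypot(x1 - x2, y1 - y2)

def policy_prior(state):
    """Stand-in for the paper's attention policy network:
    a uniform distribution over the unvisited nodes."""
    cur, visited = state
    choices = [n for n in range(len(COORDS)) if n not in visited]
    return {n: 1.0 / len(choices) for n in choices}

def value_estimate(state):
    """Stand-in for the value network: negative cost of a greedy
    nearest-neighbour completion of the tour from this state."""
    cur, visited = state
    remaining = set(range(len(COORDS))) - visited
    total, node = 0.0, cur
    while remaining:
        nxt = min(remaining, key=lambda n: dist(node, n))
        total += dist(node, nxt)
        remaining.discard(nxt)
        node = nxt
    return -(total + dist(node, 0))  # close the tour at the depot

class Node:
    def __init__(self, state):
        self.state = state
        self.children = {}        # action -> child Node
        self.N = {}; self.W = {}  # per-action visit counts and value sums
        terminal = len(state[1]) == len(COORDS)
        self.P = {} if terminal else policy_prior(state)

def select_action(node, c=1.4):
    """PUCT rule: exploit mean value, explore in proportion to the prior."""
    total = sum(node.N.values()) + 1
    return max(node.P, key=lambda a:
               node.W.get(a, 0) / (node.N.get(a, 0) or 1)
               + c * node.P[a] * math.sqrt(total) / (1 + node.N.get(a, 0)))

def simulate(root):
    """One MCTS simulation: descend, expand one leaf, back up its value."""
    path, node = [], root
    while node.P:  # non-terminal
        a = select_action(node)
        path.append((node, a))
        if a not in node.children:
            cur, visited = node.state
            node.children[a] = Node((a, visited | {a}))
            node = node.children[a]
            break
        node = node.children[a]
    v = value_estimate(node.state)
    for n, a in path:
        n.N[a] = n.N.get(a, 0) + 1
        n.W[a] = n.W.get(a, 0) + v

def search(n_sims=100):
    """Build a tour step by step, re-running MCTS at each decision point."""
    tour, state = [0], (0, {0})
    while len(tour) < len(COORDS):
        root = Node(state)
        for _ in range(n_sims):
            simulate(root)
        a = max(root.N, key=root.N.get)  # most-visited action
        tour.append(a)
        state = (a, state[1] | {a})
    return tour
```

At each step the search re-plans from the current partial tour, mirroring how MCTS refines the raw policy toward a specific instance, which is the kind of per-instance optimization the abstract contrasts with plain RL decoding.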