Rebalancing Docked Bicycle Sharing System with Approximate Dynamic Programming and Reinforcement Learning


Bibliographic Details
Main Authors: Young-Hyun Seo, Dong-Kyu Kim, Seungmo Kang, Young-Ji Byon, Seung-Young Kho
Format: Article
Language:English
Published: Wiley 2022-01-01
Series:Journal of Advanced Transportation
Online Access:http://dx.doi.org/10.1155/2022/2780711
Description
Summary:The bicycle, an active transportation mode, has received increasing attention as an alternative in urban environments worldwide. However, effectively managing the stock level of rental bicycles at each station is challenging because demand varies over time, particularly when users may return bicycles at any station. System-wide management of bicycle stock levels is therefore needed, transporting available bicycles from one station to another. In this study, a bicycle rebalancing model based on a Markov decision process (MDP) is developed using real-time dynamic programming and reinforcement learning, accounting for the system's dynamic characteristics. The pickup and return demands are stochastic and change continuously. The proposed framework therefore recommends the best rebalancing operation every 10 minutes, based on the realized system state and on future demands predicted with the random forest method, so as to minimize the expected unmet demand. Moreover, custom prioritizing strategies are adopted to reduce the number of candidate actions for the operator and the computational complexity of the MDP framework, improving its practicality. Numerical experiments demonstrate that the proposed model outperforms existing methods such as short-term rebalancing and static lookahead policies. Among the suggested prioritizing strategies, focusing on stations with larger demand-prediction errors proved most effective. The effects of various safety buffers were also examined.
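To make the idea concrete, the following is a minimal sketch of the kind of one-step lookahead decision the abstract describes: every decision epoch, enumerate candidate relocations and pick the one minimizing predicted unmet demand. All names (`one_step_unmet`, `best_move`), the truck capacity, and the simplified "serve pickups first, then accept returns" ordering are illustrative assumptions; the paper's actual MDP/RL formulation is not reproduced here.

```python
from itertools import product

def one_step_unmet(stock, pickup, ret, cap):
    """Simplified unmet demand over one period at a single station:
    pickups are served first from current stock, then returns are
    accepted up to the dock capacity (ordering is an assumption)."""
    lost_pickups = max(pickup - stock, 0)
    remaining = stock - min(pickup, stock) + ret
    lost_returns = max(remaining - cap, 0)
    return lost_pickups + lost_returns

def best_move(stocks, caps, pred_pick, pred_ret, truck=5):
    """Greedy one-step lookahead: evaluate doing nothing and every
    single relocation (from, to, qty) up to the truck capacity, and
    return the option minimizing total predicted unmet demand."""
    n = len(stocks)

    def total(s):
        return sum(one_step_unmet(s[i], pred_pick[i], pred_ret[i], caps[i])
                   for i in range(n))

    best_action, best_cost = None, total(stocks)  # baseline: do nothing
    for i, j in product(range(n), range(n)):
        if i == j:
            continue
        for q in range(1, truck + 1):
            if q > stocks[i] or stocks[j] + q > caps[j]:
                break  # infeasible: not enough bikes or docks
            s = list(stocks)
            s[i] -= q
            s[j] += q
            cost = total(s)
            if cost < best_cost:
                best_action, best_cost = (i, j, q), cost
    return best_action, best_cost
```

In the paper's framework this evaluation would use demand forecasts (e.g. from a random forest) and be repeated every 10 minutes, with prioritizing strategies pruning the candidate set; here the enumeration is exhaustive for clarity. For example, with stocks `[5, 0]` and 3 predicted pickups at the empty station, the sketch recommends moving 3 bikes there.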
ISSN:2042-3195