RL-RTree: A Reinforcement Learning-Optimized Dynamic R-Tree for High-Dimensional Spatial Indexing

Bibliographic Details
Main Author: Yongxin Peng
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11053811/
Description
Summary: Spatial indexing in high-dimensional dynamic environments faces critical challenges, including the curse of dimensionality and rapid distribution shifts, which degrade the performance of traditional indexes such as R*-trees and static learned indexes. We propose RL-RTree, a dynamic R-tree optimization method that integrates Spatial Graph Attention Networks (SGAT) for density-aware embeddings with online reinforcement learning (RL) to adjust query strategies in real time. The hybrid offline-online architecture decouples embedding learning from runtime policy optimization. Experiments on 10- to 100-dimensional data show that RL-RTree improves query speed by 75.2% and accuracy by 14.3% compared to R*-trees. In dynamic scenarios, it achieves 3.7× faster recovery than static indexes and maintains 91.3% accuracy under high noise (σ = 0.5). This work bridges the gap between learning-based indexing and real-time adaptive systems, enabling sub-second updates for mission-critical applications such as autonomous vehicles and recommendation engines. The interpretable RL policies and SGAT embeddings set a new paradigm for robust high-dimensional indexing.
ISSN:2169-3536
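
The abstract describes an online RL component that adjusts query strategies at runtime while embedding learning stays offline. The paper's actual formulation (states, rewards, policy class) is not given in this record, so the following is only a minimal illustrative sketch of the general idea, using an epsilon-greedy bandit over invented strategy names, with reward taken as negative query latency:

```python
import random

class OnlineStrategySelector:
    """Toy epsilon-greedy bandit over candidate query strategies.

    Illustrative only: the strategy names and the latency model below are
    assumptions, not taken from the RL-RTree paper.
    """

    def __init__(self, strategies, epsilon=0.1, seed=0):
        self.strategies = list(strategies)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {s: 0 for s in self.strategies}
        self.value = {s: 0.0 for s in self.strategies}  # running mean reward

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best mean.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.strategies)
        return max(self.strategies, key=lambda s: self.value[s])

    def update(self, strategy, reward):
        # Incremental running-mean update of the observed reward.
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.value[strategy] += (reward - self.value[strategy]) / n


selector = OnlineStrategySelector(["depth_first", "best_first", "pruned_scan"])
# Simulated feedback loop: reward = negative latency (made-up numbers).
latency = {"depth_first": 5.0, "best_first": 2.0, "pruned_scan": 3.5}
for _ in range(500):
    s = selector.select()
    selector.update(s, -latency[s])

best = max(selector.value, key=selector.value.get)  # → "best_first"
```

In the paper's setting such a policy would sit on top of the R-tree and be fed real query-latency feedback; the decoupling means the SGAT embeddings can be retrained offline without stalling this online loop.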