GoalNet: Goal Areas Oriented Pedestrian Trajectory Prediction

Bibliographic Details
Main Authors: Amar Fadillah, Ching-Lin Lee, Zhi-Xuan Wang, Kuan-Ting Lai
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11079594/
Description
Summary: Predicting the future trajectories of pedestrians on the road is an important task for autonomous driving. Pedestrian trajectory prediction is affected by scene paths and by pedestrians' intentions and decision-making, which makes it a multi-modal problem. Relying solely on a pedestrian's historical coordinates is the most straightforward prediction method; however, its accuracy is limited, primarily because it fails to account for the scene paths that constrain the pedestrian. Instead of predicting the future trajectory directly, we propose to use the scene context and the observed trajectory to predict goal points first, and then use these goal points to predict the future trajectories. By leveraging information from the scene context and the observed trajectory, the uncertainty can be limited to a few target areas that represent the "goals" of the pedestrian. In this paper, we propose GoalNet, a new trajectory prediction neural network based on the goal areas of a pedestrian. Our network can predict both pedestrian trajectories and bounding boxes. The overall model is efficient and modular, and its outputs can be adapted to the usage scenario. Experimental results show that GoalNet significantly improves on the previous state-of-the-art performance, by 48.7% on the JAAD dataset and 40.8% on the PIE dataset.
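
The abstract describes a two-stage scheme: first estimate a small set of candidate goal points from the scene context and the observed trajectory, then condition the trajectory prediction on those goals so that uncertainty concentrates on a few target areas. The sketch below illustrates that general idea in PyTorch; the module names, layer sizes, encoders, and number of goal hypotheses are illustrative assumptions, not the authors' actual GoalNet architecture.

```python
# Minimal sketch of two-stage, goal-conditioned trajectory prediction.
# All architectural details here are assumptions for illustration only.
import torch
import torch.nn as nn


class GoalConditionedPredictor(nn.Module):
    def __init__(self, obs_len=15, pred_len=45, num_goals=20, hidden=128):
        super().__init__()
        self.pred_len = pred_len
        self.num_goals = num_goals
        # Encode the observed trajectory ((x, y) per time step).
        self.traj_encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        # Encode a scene-context image (e.g., a semantic map crop).
        self.scene_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden),
        )
        # Stage 1: predict K candidate goal points from the fused features.
        self.goal_head = nn.Linear(2 * hidden, num_goals * 2)
        # Stage 2: decode a full future trajectory conditioned on each goal.
        self.traj_decoder = nn.Sequential(
            nn.Linear(2 * hidden + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, pred_len * 2),
        )

    def forward(self, obs_traj, scene):
        # obs_traj: (B, obs_len, 2); scene: (B, 3, H, W)
        _, h = self.traj_encoder(obs_traj)               # h: (1, B, hidden)
        traj_feat = h.squeeze(0)                         # (B, hidden)
        scene_feat = self.scene_encoder(scene)           # (B, hidden)
        fused = torch.cat([traj_feat, scene_feat], dim=-1)

        # Stage 1: K goal candidates capture the multi-modal endpoints.
        goals = self.goal_head(fused).view(-1, self.num_goals, 2)

        # Stage 2: one predicted trajectory per goal candidate.
        fused_rep = fused.unsqueeze(1).expand(-1, self.num_goals, -1)
        dec_in = torch.cat([fused_rep, goals], dim=-1)
        trajs = self.traj_decoder(dec_in).view(-1, self.num_goals, self.pred_len, 2)
        return goals, trajs


# Example: 15 observed frames, 45 predicted frames, 20 goal hypotheses.
model = GoalConditionedPredictor()
obs = torch.randn(4, 15, 2)
scene = torch.randn(4, 3, 128, 128)
goals, trajs = model(obs, scene)
print(goals.shape, trajs.shape)  # (4, 20, 2), (4, 20, 45, 2)
```

In such a design, bounding-box outputs could be obtained by widening the decoder's output dimension, which is consistent with the abstract's statement that the model is modular and its outputs can be adapted to the usage scenario.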
ISSN:2169-3536