Interpretability Study of Gradient Information in Individual Travel Prediction

Bibliographic Details
Main Authors: Ziheng Su, Pengfei Zhang, Xiaohui Song, Yifan Li
Format: Article
Language:English
Published: MDPI AG 2025-05-01
Series:Applied Sciences
Subjects: interpretability; gradient information; individual travel prediction; deep learning model
Online Access:https://www.mdpi.com/2076-3417/15/10/5269
description With the development of intelligent transportation systems (ITS), individual travel prediction has become a key technology for optimizing urban transportation. However, deep learning models are limited in decision-sensitive scenarios due to their lack of interpretability. To address the shortcomings of existing XAI methods in analyzing the dynamic features of historical travel sequences, this paper introduces an alternative interpretability method based on gradient information, overcoming the interpretability bottleneck of travel prediction models. This method calculates the gradient information of input features relative to the prediction result, breaking through the limitations of traditional interpreters that only analyze static features. It can trace the contribution weights of key time points in historical travel sequences while maintaining low computational cost. The experimental results show that features with higher gradients significantly affect predictions—masking the maximum-gradient feature reduces accuracy by approximately 30%. Descending-order masking strategies exhibit the strongest impact, highlighting nonlinear interactions among features. Contribution maps visualize how gradients capture regular patterns and anomalies. The method proposed in this paper provides a valuable tool for understanding the underlying principles of travel prediction models, bridging the gap in existing methods for temporal sequence analysis.
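The gradient-masking experiment summarized above can be sketched with a toy stand-in for the paper's travel predictor. Everything here is illustrative, not from the paper: a hand-weighted logistic regression plays the role of the deep model, a 7-day visit-count history plays the role of the travel sequence, and the gradient of the prediction with respect to each input feature is computed analytically.

```python
import math

# Illustrative weights for a toy "travel prediction" model over a
# 7-day visit-count history (not the paper's actual model).
WEIGHTS = [0.8, -0.2, 0.5, 0.1, 1.2, -0.4, 0.3]
BIAS = -0.5

def predict(x):
    """Probability of travelling today, via logistic regression."""
    z = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def gradient_saliency(x):
    """d(prediction)/d(x_i). For logistic regression this is
    sigma'(z) * w_i = p * (1 - p) * w_i, so no autograd is needed."""
    p = predict(x)
    return [p * (1.0 - p) * w for w in WEIGHTS]

def mask_top_feature(x, grads):
    """Zero out the feature with the largest |gradient|, mimicking
    the paper's maximum-gradient masking experiment."""
    top = max(range(len(grads)), key=lambda i: abs(grads[i]))
    masked = list(x)
    masked[top] = 0.0
    return masked, top

history = [1.0, 0.0, 1.0, 1.0, 2.0, 0.0, 1.0]  # visits over the last 7 days
grads = gradient_saliency(history)
masked, top_idx = mask_top_feature(history, grads)
print("masked day:", top_idx)
print("prediction before:", predict(history))
print("prediction after: ", predict(masked))
```

Masking the single highest-gradient feature visibly drops the predicted probability, which is the qualitative effect the paper reports; a descending-order variant would repeat `mask_top_feature` on the remaining features.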
id doaj-art-9b921762e08f4e3880b98b6b47583e3d
institution DOAJ
issn 2076-3417
volume 15
issue 10
article 5269
doi 10.3390/app15105269
affiliation Institute of Physics, Henan Academy of Sciences, Zhengzhou 450046, China (all four authors)
topic interpretability
gradient information
individual travel prediction
deep learning model