Making Time Series Embeddings More Interpretable in Deep Learning
With the success of language models in deep learning, multiple new time series embeddings have been proposed. However, the interpretability of those representations is often still lacking compared to word embeddings. This paper tackles this issue, aiming to present some criteria for making time series embeddings applied in deep learning models more interpretable using higher-level features in symbolic form. For that, we investigate two different approaches for extracting symbolic approximation representations regarding the frequency and the trend information, i.e. the Symbolic Fourier Approximation (SFA) and the Symbolic Aggregate approXimation (SAX). In particular, we analyze and discuss the impact of applying the different representation approaches. Furthermore, in our experimentation, we apply a state-of-the-art Transformer model to demonstrate the efficacy of the proposed approach regarding explainability in a comprehensive evaluation using a large set of time series datasets.
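The abstract names the Symbolic Aggregate approXimation (SAX) as one of the two symbolic representations studied. As a rough, self-contained illustration of that technique (not the authors' implementation; the alphabet size of 4 and the standard Gaussian breakpoints are illustrative choices), a minimal SAX transform can be sketched as:

```python
# Minimal SAX sketch: z-normalize, piecewise aggregate approximation (PAA),
# then map each segment mean to a symbol via Gaussian breakpoints.
import numpy as np

# Breakpoints splitting the standard normal into 4 equiprobable bins.
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])
ALPHABET = "abcd"

def sax(series, n_segments):
    """Convert a 1-D series into a SAX word of n_segments symbols."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)       # z-normalize
    segments = np.array_split(x, n_segments)     # PAA segments
    paa = np.array([seg.mean() for seg in segments])
    idx = np.searchsorted(BREAKPOINTS, paa)      # bin each segment mean
    return "".join(ALPHABET[i] for i in idx)

word = sax([0, 1, 2, 3, 4, 5, 6, 7], n_segments=4)  # → "abcd"
```

A monotonically rising series maps to a rising symbol sequence, which is what makes such words usable as interpretable higher-level tokens for a Transformer.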
Saved in:
| Main Authors: | Leonid Schwenke, Martin Atzmueller |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2023-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Subjects: | deep learning; symbolic time series analysis; explainable embeddings; symbolic representation; transformer; high-level feature extraction |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/133107 |
| DOI: | 10.32473/flairs.36.133107 |
|---|---|
| Authors: | Leonid Schwenke (ORCID: 0000-0002-2337-3905), Osnabrück University; Martin Atzmueller (ORCID: 0000-0002-2480-6901), Osnabrück University & DFKI |
| Collection: | DOAJ |
| ISSN: | 2334-0754, 2334-0762 |
| Volume: | 36 (2023) |
| Subjects: | deep learning; symbolic time series analysis; explainable embeddings; symbolic representation; transformer; high-level feature extraction |