Accelerating Energy Forecasting with Data Dimensionality Reduction in a Residential Environment

Bibliographic Details
Main Authors: Rafael Gonçalves, Diogo Magalhães, Rafael Teixeira, Mário Antunes, Diogo Gomes, Rui L. Aguiar
Format: Article
Language: English
Published: MDPI AG 2025-03-01
Series: Energies
Online Access: https://www.mdpi.com/1996-1073/18/7/1637
Description
Summary: The non-stationary nature of energy data is a serious challenge for energy forecasting methods. Frequent model updates are necessary to adapt to distribution shifts and avoid performance degradation. However, retraining regression models with lookback windows large enough to capture energy patterns is computationally expensive, as increasing the number of features leads to longer training times. To address this problem, we propose an approach that guarantees fast convergence through dimensionality reduction. Using a synthetic neighborhood dataset, we first validate three deep learning models—an artificial neural network (ANN), a 1D convolutional neural network (1D-CNN), and a long short-term memory (LSTM) network. Then, in order to mitigate the long training time, we apply principal component analysis (PCA) and a variational autoencoder (VAE) for feature reduction. As a way to ensure the suitability of the proposed models for a residential context, we also explore the trade-off between low error and training speed by considering three test scenarios: a global model, a local model for each building, and a global model that is fine-tuned for each building. Our results demonstrate that by selecting the optimal dimensionality reduction method and model architecture, it is possible to decrease the mean squared error (MSE) by up to 63% and accelerate training by up to 80%.
ISSN: 1996-1073
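
The sketch below illustrates the core idea described in the summary: compressing a large lookback window with PCA before fitting a neural-network regressor, so that retraining after a distribution shift is cheaper. It is a minimal example using scikit-learn on synthetic data; the window length, number of principal components, network sizes, and dataset are illustrative assumptions, not the authors' actual configuration or results.

```python
# Minimal sketch (assumed setup, not the paper's code): PCA feature reduction
# on lookback windows before training an ANN regressor for energy forecasting.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def make_supervised(series: np.ndarray, lookback: int) -> tuple[np.ndarray, np.ndarray]:
    """Turn a 1D energy series into (lookback-window, next-value) pairs."""
    X = np.array([series[i : i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y


# Hypothetical stand-in for one building's consumption series.
rng = np.random.default_rng(0)
t = np.arange(5000)
series = 1.0 + 0.5 * np.sin(2 * np.pi * t / 96) + 0.1 * rng.standard_normal(t.size)

# Lookback of 336 steps (e.g. two weeks of 15-minute readings) is an assumption.
X, y = make_supervised(series, lookback=336)

# Baseline: train directly on the full lookback window.
baseline = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0),
)

# Reduced: project each window onto a handful of principal components first,
# shrinking the input dimensionality (and hence training time) before the ANN.
reduced = make_pipeline(
    StandardScaler(),
    PCA(n_components=16),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0),
)

for name, model in [("full window", baseline), ("PCA-reduced", reduced)]:
    model.fit(X[:-500], y[:-500])
    mse = np.mean((model.predict(X[-500:]) - y[-500:]) ** 2)
    print(f"{name}: test MSE = {mse:.4f}")
```

A trained VAE encoder could stand in for the PCA step in the same pipeline shape, and the reduced features could equally feed the local per-building models or the per-building fine-tuned global model compared in the summary.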