Distributed Gaussian Processes With Uncertain Inputs
| Main Author: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10756652/ |
| Summary: | Gaussian Process regression is a powerful non-parametric approach that facilitates probabilistic uncertainty quantification in machine learning. Distributed Gaussian Process (DGP) methods offer scalable solutions by dividing data among multiple GP models (or “experts”). DGPs have primarily been applied in contexts such as multi-agent systems, federated learning, Bayesian optimisation, and state estimation. However, existing research seldom addresses scenarios where the model inputs are uncertain, a situation that can arise in applications involving sensor noise or time-series modelling. Consequently, this paper investigates using a variant of DGP, the Generalised Product-of-Expert Gaussian Process, for the case where model inputs are uncertain. Three alternative approaches are proposed, together with a theoretically optimal solution against which they can be compared. A simple simulated case study is then used to demonstrate that none of the approaches can be guaranteed to be optimal under all conditions. The paper therefore aims to provide a baseline and motivation for future work in applying DGP models to problems with uncertain inputs. |
|---|---|
| ISSN: | 2169-3536 |
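
The record does not spell out how the expert predictions are fused or how the paper's three approaches treat uncertain inputs, so the following is only an illustrative sketch: a standard generalised Product-of-Experts combination of Gaussian expert predictions, with an uncertain test input handled by simple Monte Carlo moment matching. The function names (`gpoe_fuse`, `predict_with_uncertain_input`), the equal expert weights, and the toy experts are assumptions for illustration, not the paper's method.

```python
import numpy as np

def gpoe_fuse(means, variances, betas=None):
    """Generalised Product-of-Experts fusion of Gaussian expert predictions.

    means, variances: per-expert predictive mean and variance at one test input.
    betas: expert weights; defaults to 1/n_experts so the fused variance does
    not shrink artificially as more experts are added.
    Returns the fused predictive mean and variance.
    """
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    if betas is None:
        betas = np.full(means.shape, 1.0 / means.size)
    precisions = betas / variances              # weighted precisions
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var


def predict_with_uncertain_input(experts, x_mean, x_std, n_samples=500, rng=None):
    """Monte Carlo handling of an uncertain scalar test input x* ~ N(x_mean, x_std^2).

    experts: callables mapping a test input to a (mean, variance) pair,
    e.g. wrappers around locally trained GP experts.
    Returns the moment-matched mean and variance of the resulting mixture.
    """
    rng = np.random.default_rng(rng)
    xs = rng.normal(x_mean, x_std, size=n_samples)
    mus, vars_ = [], []
    for x in xs:
        m, v = gpoe_fuse(*zip(*(expert(x) for expert in experts)))
        mus.append(m)
        vars_.append(v)
    mus, vars_ = np.array(mus), np.array(vars_)
    mean = mus.mean()
    var = (vars_ + mus**2).mean() - mean**2     # law of total variance
    return mean, var


# Toy usage: two hand-made "experts" with different confidence about sin(x)
experts = [lambda x: (np.sin(x), 0.05), lambda x: (np.sin(x) + 0.1, 0.2)]
print(predict_with_uncertain_input(experts, x_mean=1.0, x_std=0.1, rng=0))
```

Averaging the per-sample means and applying the law of total variance treats the uncertain input as inducing a mixture of fused Gaussians; whether this, or any of the paper's proposed alternatives, is closest to the theoretically optimal solution is exactly the question the case study in the article examines.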